Discussion:
[uml-devel] [uml-user] UML on WSL
Richard Weinberger
2017-05-07 14:11:36 UTC
Thomas,
Hi,
Did anybody try to run UML on the new Windows 10 Subsystem for Linux? I
wonder what missing functions may hinder running UML on WSL?
No idea.
Can you try? I don't have access to a Windows 10 system right now
where I could test.
--
Thanks,
//richard
Thomas Meyer
2017-05-08 15:02:35 UTC
Post by Richard Weinberger
Thomas,
Hi,
Hi,
Post by Richard Weinberger
Did anybody try to run UML on the new Windows 10 Subsystem for Linux? I
wonder what missing functions may hinder running UML on WSL?
No idea.
Can you try? I don't have access to a Windows 10 system right now
where I could test.
Sadly, the UML executable bails out very early. It looks like WSL is
missing some PTRACE stuff:

***@DESKTOP-DQBDJ0U:/mnt/c/Users/thomas/Downloads$ ./linux
Core dump limits :
soft - NONE
hard - NONE
Checking that ptrace can change system call numbers...check_ptrace:
PTRACE_OLDSETOPTIONS failed: Invalid Argument
Post by Richard Weinberger
--
Thanks,
//richard
Richard Weinberger
2017-05-08 15:05:10 UTC
Thomas,
Post by Thomas Meyer
soft - NONE
hard - NONE
Checking that ptrace can change system call numbers...check_ptrace: PTRACE_OLDSETOPTIONS failed: Invalid Argument
We could figure out how to report issues to WSL, create self-hosting unit tests, and ask them to add/fix
these features.
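
For illustration, such a self-hosting test could be quite small. The sketch below is not UML's actual check_ptrace code; it assumes the legacy x86 request number 21 for PTRACE_OLDSETOPTIONS (the pre-PTRACE_SETOPTIONS value) and simply asks the kernel to set PTRACE_O_TRACESYSGOOD on a stopped child:

/* oldsetoptions-test.c: does the host accept PTRACE_OLDSETOPTIONS? */
#include <stdio.h>
#include <signal.h>
#include <unistd.h>
#include <sys/ptrace.h>
#include <sys/wait.h>

#ifndef PTRACE_OLDSETOPTIONS
#define PTRACE_OLDSETOPTIONS 21   /* legacy x86 request, see asm/ptrace-abi.h */
#endif

int main(void)
{
        pid_t pid = fork();

        if (pid == 0) {                         /* traced child */
                ptrace(PTRACE_TRACEME, 0, NULL, NULL);
                raise(SIGSTOP);                 /* hand control to the parent */
                _exit(0);
        }

        waitpid(pid, NULL, 0);                  /* child is now stopped */

        if (ptrace(PTRACE_OLDSETOPTIONS, pid, NULL,
                   (void *)(long)PTRACE_O_TRACESYSGOOD) < 0)
                perror("PTRACE_OLDSETOPTIONS");
        else
                puts("PTRACE_OLDSETOPTIONS: OK");

        kill(pid, SIGKILL);
        waitpid(pid, NULL, 0);
        return 0;
}

On an ordinary Linux host this should print "PTRACE_OLDSETOPTIONS: OK"; on WSL it is expected to fail with "Invalid argument", matching the output above.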

Thanks,
//richard
Thomas Meyer
2017-05-08 15:32:26 UTC
Post by Richard Weinberger
Thomas,
Post by Thomas Meyer
soft - NONE
hard - NONE
Checking that ptrace can change system call numbers...check_ptrace: PTRACE_OLDSETOPTIONS failed: Invalid Argument
We could figure out how to report issues to WSL, create self-hosting unit tests, and ask them to add/fix
these features.
Turns out there was already a bug report by somebody about missing UML support in WSL:

https://github.com/Microsoft/BashOnWindows/issues/1692
Post by Richard Weinberger
Thanks,
//richard
Richard Weinberger
2017-05-08 15:35:04 UTC
Thomas,
Post by Thomas Meyer
Post by Richard Weinberger
We could figure out how to report issues to WSL, create self-hosting unit tests, and ask them to add/fix
these features.
https://github.com/Microsoft/BashOnWindows/issues/1692
Ah, there are tons of ptrace() features missing.
UML is a major user of ptrace(), worse than GDB. ;-\

Thanks,
//richard
Thomas Meyer
2017-05-08 15:40:17 UTC
Post by Richard Weinberger
Thomas,
Post by Thomas Meyer
Post by Richard Weinberger
We could figure out how to report issues to WSL, create self-hosting unit tests, and ask them to add/fix
these features.
https://github.com/Microsoft/BashOnWindows/issues/1692
Ah, there are tons of ptrace() features missing.
UML is a major user of ptrace(), worse than GDB. ;-\
Yes, I know.

Probably also worth quoting the discussion from the relevant GH issue regarding PTRACE_OLDSETOPTIONS:

"
Also, for a little context, the only software I can find on the planet that cares is User Mode Linux. Unless someone tries to run some statically linked strace or maybe gdb binary from the 2.4 era, this will simply never be hit on WSL in 2017. UML seems to still care, entirely academically, to maintain binary compatibility with 2.4. You can think of it as UML never buying into the idea the value changed, while everyone else moved"
Post by Richard Weinberger
Thanks,
//richard
Richard Weinberger
2017-05-08 16:07:45 UTC
Thomas,
"
Also, for a little context, the only software I can find on the planet that cares is User Mode Linux. Unless someone tries to run some statically linked strace or
maybe gdb binary from the 2.4 era, this will simply never be hit on WSL in 2017. UML seems to still care, entirely academically, to maintain binary compatibility with 2.4. You
can think of it as UML never buying into the idea the value changed, while everyone else moved"
-ENOPATCH. :-)

Thanks,
//richard
Thomas Meyer
2017-05-08 16:09:57 UTC
Post by Thomas Meyer
Post by Richard Weinberger
Thomas,
Post by Thomas Meyer
Post by Richard Weinberger
We could figure out how to report issues to WSL, create self-hosting unit tests, and ask them to add/fix
these features.
https://github.com/Microsoft/BashOnWindows/issues/1692
Ah, there are tons of ptrace() features missing.
UML is a major user of ptrace(), worse than GDB. ;-\
Yes, I know.
"
Also, for a little context, the only software I can find on the planet that cares is User Mode Linux. Unless someone tries to run some statically linked strace or maybe gdb binary from the 2.4 era, this will simply never be hit on WSL in 2017. UML seems to still care, entirely academically, to maintain binary compatibility with 2.4. You can think of it as UML never buying into the idea the value changed, while everyone else moved"
Or, asked the other way around:

Is it documented somewhere what the minimum host kernel version is that a UML kernel will run on?

E.g.:
a UML kernel built from 4.11 will need host kernel version 2.6.18 with features x, y, and z enabled?
Post by Thomas Meyer
Post by Richard Weinberger
Thanks,
//richard
Richard Weinberger
2017-05-08 16:15:00 UTC
Thomas,
Post by Thomas Meyer
Is it documented somewhere what the minimum host kernel version is that a UML kernel will run on?
a UML kernel built from 4.11 will need host kernel version 2.6.18 with features x, y, and z enabled?
Not really. But let's be realistic, we don't have to support a 2.4 host.
UML should run on any kernel of a supported distro.

On the other hand, if we can help WSL with a small change to UML, I'll happily apply such a patch.

Thanks,
//richard
Anton Ivanov
2017-05-08 16:21:23 UTC
So far it is late 2.6. The high-res timer subsystem settled fully
somewhere circa 2.6.10, if memory serves me right. One of those lovely
kernels which had VM collapse bugs :)

I am finishing testing the vector IO drivers and the epoll IRQ controller;
for them you will need 3.0 onwards.

In fact, I have run into brokenness in the core net_sched in 4.11, so I
cannot fully test the xmit path in the new network drivers; otherwise I
would have sent them out by now.

I have asked on linux-net, but no answer so far, so I am trying to debug it
myself. It is seriously broken (both for us and for virtio, so kvm/qemu
should be affected too).

A.
Post by Richard Weinberger
Thomas,
Post by Thomas Meyer
Is it documented somewhere what the minimum host kernel version is that a UML kernel will run on?
a UML kernel built from 4.11 will need host kernel version 2.6.18 with features x, y, and z enabled?
Not really. But let's be realistic, we don't have to support a 2.4 host.
UML should run on any kernel of a supported distro.
On the other hand, if we can help WSL with a small change to UML, I'll happily apply such a patch.
Thanks,
//richard
Richard Weinberger
2017-05-08 19:45:34 UTC
Thomas,

Can you please give the attached patch a try?

Thanks,
//richard
Thomas Meyer
2017-05-09 08:15:36 UTC
Post by Richard Weinberger
Thomas,
Can you please give the attached patch a try?
Hi,

The attached patch works correctly under Linux, but there is no change under WSL. As
stated in the relevant GH issue, there seem to be far more
roadblocks to making UML work under WSL.

With your patch I get this under WSL:

***@DESKTOP-DQBDJ0U:/mnt/c/Users/thomas/VmShare$ ./linux
Core dump limits :
soft - NONE
hard - NONE
Checking that ptrace can change system call numbers...check_ptrace :
failed to modify system call: Invalid Argument
Post by Richard Weinberger
Thanks,
//richard
Richard Weinberger
2017-05-09 13:11:51 UTC
Thomas,
The attached patch works correctly under Linux, but there is no change under WSL. As stated in the relevant GH issue, there seem to be far more roadblocks to making UML work under WSL.
soft - NONE
hard - NONE
Checking that ptrace can change system call numbers...check_ptrace : failed to modify system call: Invalid Argument
Okay, now it fails later.
UML needs to cancel syscalls on the host side; it does so by turning them into a getpid(), which has no side
effects, and, on non-ancient systems, by using PTRACE_SYSEMU.
Let's figure out whether they support PTRACE_SYSEMU; can you test the attached patch?
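
For context, the "turn it into getpid()" trick is roughly the sketch below. This is not UML's actual tracer code, just a minimal x86_64 illustration (it assumes the orig_rax slot reached via PTRACE_PEEKUSER/PTRACE_POKEUSER); modern UML prefers PTRACE_SYSEMU, which skips the syscall entirely instead of rewriting it:

/* nullify-syscall.c: cancel a traced child's exit() by rewriting the
 * pending syscall number to getpid(), which has no side effects. */
#define _GNU_SOURCE
#include <stddef.h>
#include <stdio.h>
#include <signal.h>
#include <unistd.h>
#include <sys/ptrace.h>
#include <sys/user.h>
#include <sys/wait.h>
#include <sys/syscall.h>

int main(void)
{
        pid_t pid = fork();

        if (pid == 0) {                         /* traced child */
                ptrace(PTRACE_TRACEME, 0, NULL, NULL);
                raise(SIGSTOP);
                syscall(SYS_exit, 42);          /* should be cancelled */
                puts("exit() was turned into getpid(), child still alive");
                return 0;
        }

        waitpid(pid, NULL, 0);                  /* child stopped itself */

        for (;;) {                              /* run to exit()'s entry stop */
                ptrace(PTRACE_SYSCALL, pid, NULL, NULL);
                waitpid(pid, NULL, 0);
                if (ptrace(PTRACE_PEEKUSER, pid,
                           offsetof(struct user_regs_struct, orig_rax),
                           NULL) == SYS_exit)
                        break;
        }

        /* Nullify the syscall: the kernel will run getpid() instead. */
        ptrace(PTRACE_POKEUSER, pid,
               offsetof(struct user_regs_struct, orig_rax), SYS_getpid);

        ptrace(PTRACE_CONT, pid, NULL, NULL);
        waitpid(pid, NULL, 0);
        return 0;
}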

Also, please test segv1.c; it tests whether WSL allows us to handle page faults in userspace.
It should output this:
SIGSEGV at 0xdeadbeef, fixing up
x=3, &x=0xdeadbeef

IOW we write to 0xdeadbeef, catch the fault and fix it.
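
The actual segv1.c was attached to the mail and is not in the archive; a minimal sketch consistent with the expected output above installs a SA_SIGINFO handler and maps an anonymous page over the faulting address reported in si_addr:

/* segv1-like test: write to 0xdeadbeef, catch the SIGSEGV, map a page
 * over the faulting address and let the store retry. */
#define _GNU_SOURCE
#include <stdio.h>
#include <string.h>
#include <stdint.h>
#include <signal.h>
#include <unistd.h>
#include <sys/mman.h>

static void segv_handler(int sig, siginfo_t *si, void *ctx)
{
        unsigned long addr = (unsigned long)si->si_addr;
        void *page = (void *)(addr & ~0xfffUL);

        (void)sig; (void)ctx;
        /* printf() is not async-signal-safe, but good enough for a test. */
        printf("SIGSEGV at 0x%lx, fixing up\n", addr);

        if (mmap(page, 4096, PROT_READ | PROT_WRITE,
                 MAP_PRIVATE | MAP_ANONYMOUS | MAP_FIXED, -1, 0) == MAP_FAILED)
                _exit(1);
        /* On return the faulting store is restarted and now succeeds. */
}

int main(void)
{
        struct sigaction sa;
        volatile int *x = (volatile int *)(uintptr_t)0xdeadbeefUL;

        memset(&sa, 0, sizeof(sa));
        sa.sa_sigaction = segv_handler;
        sa.sa_flags = SA_SIGINFO;
        sigaction(SIGSEGV, &sa, NULL);

        *x = 3;
        printf("x=%d, &x=0x%lx\n", *x, (unsigned long)(uintptr_t)x);
        return 0;
}

If the host does not fill in si_addr correctly, the handler fixes up the wrong page and the fault never goes away.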

Thanks,
//richard
Thomas Meyer
2017-05-09 17:25:08 UTC
Post by Richard Weinberger
Thomas,
Post by Thomas Meyer
The attached patch works correctly under Linux, but there is no change under WSL.
As stated in the relevant GH issue, there seem to be far more
roadblocks to making UML work under WSL.
soft - NONE
hard - NONE
Checking that ptrace can change system call numbers...check_ptrace
: failed to modify system call: Invalid Argument
Okay, now it fails later.
UML needs to cancel syscalls on the host side; it does so by
turning them into a getpid(), which has no side
effects, and, on non-ancient systems, by using PTRACE_SYSEMU.
Let's figure out whether they support PTRACE_SYSEMU; can you test the attached patch?
Okay, it now proceeds even further:

***@DESKTOP-DQBDJ0U:/mnt/c/Users/thomas/VmShare$ ./linux mem=128m
Core dump limits :
soft - NONE
hard - NONE
Checking syscall emulation patch for ptrace...missing
Checking environment variables for a tempdir...none found
Checking if /dev/shm is on tmpfs...OK
Checking PROT_EXEC mmap in /dev/shm...OK
Adding 3665920 bytes to physical memory to account for exec-shield gap
kmsg_dump:
<3>[ 2.050000] Slab cache with size 1056 has lost its name
<3>[ 2.050000] Slab cache with size 160 has lost its name
<3>[ 2.050000] Slab cache with size 1440 has lost its name
<3>[ 2.050000] Slab cache with size 168 has lost its name
<3>[ 2.050000] Slab cache with size 432 has lost its name
<3>[ 2.050000] Slab cache with size 984 has lost its name
<3>[ 2.050000] Slab cache with size 320 has lost its name
<3>[ 2.050000] Slab cache with size 72 has lost its name
<3>[ 2.050000] Slab cache with size 72 has lost its name
<3>[ 2.050000] Slab cache with size 112 has lost its name
<3>[ 2.050000] Slab cache with size 296 has lost its name
<3>[ 2.050000] Slab cache with size 104 has lost its name
<3>[ 2.050000] Slab cache with size 56 has lost its name
<3>[ 2.050000] Slab cache with size 184 has lost its name
<3>[ 2.050000] Slab cache with size 1464 has lost its name
<3>[ 2.050000] Slab cache with size 776 has lost its name
<3>[ 2.050000] Slab cache with size 1408 has lost its name
<3>[ 2.050000] Slab cache with size 2216 has lost its name
<3>[ 2.050000] Slab cache with size 6000 has lost its name
<3>[ 2.050000] Slab cache with size 80 has lost its name
<3>[ 2.050000] Slab cache with size 176 has lost its name
<3>[ 2.050000] Slab cache with size 40 has lost its name
<3>[ 2.050000] Slab cache with size 88 has lost its name
<3>[ 2.050000] Slab cache with size 48 has lost its name
<3>[ 2.050000] Slab cache with size 576 has lost its name
<3>[ 2.050000] Slab cache with size 616 has lost its name
<3>[ 2.050000] Slab cache with size 8192 has lost its name
<3>[ 2.050000] Slab cache with size 4096 has lost its name
<3>[ 2.050000] Slab cache with size 2048 has lost its name
<3>[ 2.050000] Slab cache with size 1024 has lost its name
<3>[ 2.050000] Slab cache with size 512 has lost its name
<3>[ 2.050000] Slab cache with size 256 has lost its name
<3>[ 2.050000] Slab cache with size 192 has lost its name
<3>[ 2.050000] Slab cache with size 128 has lost its name
<3>[ 2.050000] Slab cache with size 96 has lost its name
<3>[ 2.050000] Slab cache with size 64 has lost its name
<3>[ 2.050000] Slab cache with size 32 has lost its name
<3>[ 2.050000] Slab cache with size 16 has lost its name
<3>[ 2.050000] Slab cache with size 8 has lost its name
<3>[ 2.050000] Slab cache with size 128 has lost its name
<3>[ 2.050000] Slab cache with size 200 has lost its name
<7>[ 2.050000] SELinux: Registering netfilter hooks
<6>[ 2.050000] cryptomgr_test (23) used greatest stack depth: 6216
bytes left
<6>[ 2.050000] jitterentropy: Initialization failed with host not
compliant with requirements: 2
<6>[ 2.050000] io scheduler noop registered
<6>[ 2.050000] io scheduler deadline registered (default)
<3>[ 2.050000] Slab cache with size 1416 has lost its name
<3>[ 2.050000] Slab cache with size 1304 has lost its name
<3>[ 2.050000] Slab cache with size 464 has lost its name
<3>[ 2.050000] Slab cache with size 1344 has lost its name
<3>[ 2.050000] Slab cache with size 1784 has lost its name
<3>[ 2.050000] Slab cache with size 800 has lost its name
<3>[ 2.050000] Slab cache with size 1032 has lost its name
<3>[ 2.050000] Slab cache with size 1288 has lost its name
<3>[ 2.050000] Slab cache with size 32 has lost its name
<3>[ 2.050000] Slab cache with size 1040 has lost its name
<3>[ 2.050000] Slab cache with size 48 has lost its name
<3>[ 2.050000] Slab cache with size 112 has lost its name
<3>[ 2.050000] Slab cache with size 16 has lost its name
<3>[ 2.050000] Slab cache with size 32 has lost its name
<3>[ 2.050000] Slab cache with size 2152 has lost its name
<3>[ 2.050000] Slab cache with size 128 has lost its name
<3>[ 2.050000] Slab cache with size 168 has lost its name
<3>[ 2.050000] Slab cache with size 64 has lost its name
<3>[ 2.050000] Slab cache with size 40 has lost its name
<3>[ 2.050000] Slab cache with size 56 has lost its name
<3>[ 2.050000] Slab cache with size 24 has lost its name
<3>[ 2.050000] Slab cache with size 96 has lost its name
<3>[ 2.050000] Slab cache with size 768 has lost its name
<3>[ 2.050000] Slab cache with size 440 has lost its name
<3>[ 2.050000] Slab cache with size 144 has lost its name
<3>[ 2.050000] Slab cache with size 696 has lost its name
<3>[ 2.050000] Slab cache with size 280 has lost its name
<3>[ 2.050000] Slab cache with size 1824 has lost its name
<3>[ 2.050000] Slab cache with size 296 has lost its name
<3>[ 2.050000] Slab cache with size 280 has lost its name
<3>[ 2.050000] Slab cache with size 352 has lost its name
<3>[ 2.050000] Slab cache with size 2704 has lost its name
<3>[ 2.050000] Slab cache with size 416 has lost its name
<3>[ 2.050000] Slab cache with size 152 has lost its name
<3>[ 2.050000] Slab cache with size 2112 has lost its name
<3>[ 2.050000] Slab cache with size 216 has lost its name
<3>[ 2.050000] Slab cache with size 1032 has lost its name
<3>[ 2.050000] Slab cache with size 272 has lost its name
<3>[ 2.050000] Slab cache with size 120 has lost its name
<3>[ 2.050000] Slab cache with size 6056 has lost its name
<3>[ 2.050000] Slab cache with size 1208 has lost its name
<3>[ 2.050000] Slab cache with size 136 has lost its name
<3>[ 2.050000] Slab cache with size 328 has lost its name
<3>[ 2.050000] Slab cache with size 1056 has lost its name
<3>[ 2.050000] Slab cache with size 160 has lost its name
<3>[ 2.050000] Slab cache with size 1440 has lost its name
<3>[ 2.050000] Slab cache with size 168 has lost its name
<3>[ 2.050000] Slab cache with size 432 has lost its name
<3>[ 2.050000] Slab cache with size 984 has lost its name
<3>[ 2.050000] Slab cache with size 320 has lost its name
<3>[ 2.050000] Slab cache with size 72 has lost its name
<3>[ 2.050000] Slab cache with size 72 has lost its name
<3>[ 2.050000] Slab cache with size 112 has lost its name
<3>[ 2.050000] Slab cache with size 296 has lost its name
<3>[ 2.050000] Slab cache with size 104 has lost its name
<3>[ 2.050000] Slab cache with size 56 has lost its name
<3>[ 2.050000] Slab cache with size 184 has lost its name
<3>[ 2.050000] Slab cache with size 1464 has lost its name
<3>[ 2.050000] Slab cache with size 776 has lost its name
<3>[ 2.050000] Slab cache with size 1408 has lost its name
<3>[ 2.050000] Slab cache with size 2216 has lost its name
<3>[ 2.050000] Slab cache with size 6000 has lost its name
<3>[ 2.050000] Slab cache with size 80 has lost its name
<3>[ 2.050000] Slab cache with size 176 has lost its name
<3>[ 2.050000] Slab cache with size 40 has lost its name
<3>[ 2.050000] Slab cache with size 88 has lost its name
<3>[ 2.050000] Slab cache with size 48 has lost its name
<3>[ 2.050000] Slab cache with size 576 has lost its name
<3>[ 2.050000] Slab cache with size 616 has lost its name
<3>[ 2.050000] Slab cache with size 8192 has lost its name
<3>[ 2.050000] Slab cache with size 4096 has lost its name
<3>[ 2.050000] Slab cache with size 2048 has lost its name
<3>[ 2.050000] Slab cache with size 1024 has lost its name
<3>[ 2.050000] Slab cache with size 512 has lost its name
<3>[ 2.050000] Slab cache with size 256 has lost its name
<3>[ 2.050000] Slab cache with size 192 has lost its name
<3>[ 2.050000] Slab cache with size 128 has lost its name
<3>[ 2.050000] Slab cache with size 96 has lost its name
<3>[ 2.050000] Slab cache with size 64 has lost its name
<3>[ 2.050000] Slab cache with size 32 has lost its name
<3>[ 2.050000] Slab cache with size 16 has lost its name
<3>[ 2.050000] Slab cache with size 8 has lost its name
<3>[ 2.050000] Slab cache with size 128 has lost its name
<3>[ 2.050000] Slab cache with size 200 has lost its name
<3>[ 2.050000] Slab cache with size 240 has lost its name
<3>[ 2.050000] Slab cache with size 1416 has lost its name
<3>[ 2.050000] Slab cache with size 1304 has lost its name
<3>[ 2.050000] Slab cache with size 464 has lost its name
<3>[ 2.050000] Slab cache with size 1344 has lost its name
<3>[ 2.050000] Slab cache with size 1784 has lost its name
<3>[ 2.050000] Slab cache with size 800 has lost its name
<3>[ 2.050000] Slab cache with size 1032 has lost its name
<3>[ 2.050000] Slab cache with size 1288 has lost its name
<3>[ 2.050000] Slab cache with size 32 has lost its name
<3>[ 2.050000] Slab cache with size 1040 has lost its name
<3>[ 2.050000] Slab cache with size 48 has lost its name
<3>[ 2.050000] Slab cache with size 112 has lost its name
<3>[ 2.050000] Slab cache with size 16 has lost its name
<3>[ 2.050000] Slab cache with size 32 has lost its name
<3>[ 2.050000] Slab cache with size 2152 has lost its name
<3>[ 2.050000] Slab cache with size 128 has lost its name
<3>[ 2.050000] Slab cache with size 168 has lost its name
<3>[ 2.050000] Slab cache with size 64 has lost its name
<3>[ 2.050000] Slab cache with size 40 has lost its name
<3>[ 2.050000] Slab cache with size 56 has lost its name
<3>[ 2.050000] Slab cache with size 24 has lost its name
<3>[ 2.050000] Slab cache with size 96 has lost its name
<3>[ 2.050000] Slab cache with size 768 has lost its name
<3>[ 2.050000] Slab cache with size 440 has lost its name
<3>[ 2.050000] Slab cache with size 144 has lost its name
<3>[ 2.050000] Slab cache with size 696 has lost its name
<3>[ 2.050000] Slab cache with size 280 has lost its name
<3>[ 2.050000] Slab cache with size 1824 has lost its name
<3>[ 2.050000] Slab cache with size 296 has lost its name
<3>[ 2.050000] Slab cache with size 280 has lost its name
<3>[ 2.050000] Slab cache with size 352 has lost its name
<3>[ 2.050000] Slab cache with size 2704 has lost its name
<3>[ 2.050000] Slab cache with size 416 has lost its name
<3>[ 2.050000] Slab cache with size 152 has lost its name
<3>[ 2.050000] Slab cache with size 2112 has lost its name
<3>[ 2.050000] Slab cache with size 216 has lost its name
<3>[ 2.050000] Slab cache with size 1032 has lost its name
<3>[ 2.050000] Slab cache with size 272 has lost its name
<3>[ 2.050000] Slab cache with size 120 has lost its name
<3>[ 2.050000] Slab cache with size 6056 has lost its name
<3>[ 2.050000] Slab cache with size 1208 has lost its name
<3>[ 2.050000] Slab cache with size 136 has lost its name
<3>[ 2.050000] Slab cache with size 328 has lost its name
<3>[ 2.050000] Slab cache with size 1056 has lost its name
<3>[ 2.050000] Slab cache with size 160 has lost its name
<3>[ 2.050000] Slab cache with size 1440 has lost its name
<3>[ 2.050000] Slab cache with size 168 has lost its name
<3>[ 2.050000] Slab cache with size 432 has lost its name
<3>[ 2.050000] Slab cache with size 984 has lost its name
<3>[ 2.050000] Slab cache with size 320 has lost its name
<3>[ 2.050000] Slab cache with size 72 has lost its name
<3>[ 2.050000] Slab cache with size 72 has lost its name
<3>[ 2.050000] Slab cache with size 112 has lost its name
<3>[ 2.050000] Slab cache with size 296 has lost its name
<3>[ 2.050000] Slab cache with size 104 has lost its name
<3>[ 2.050000] Slab cache with size 56 has lost its name
<3>[ 2.050000] Slab cache with size 184 has lost its name
<3>[ 2.050000] Slab cache with size 1464 has lost its name
<3>[ 2.050000] Slab cache with size 776 has lost its name
<3>[ 2.050000] Slab cache with size 1408 has lost its name
<3>[ 2.050000] Slab cache with size 2216 has lost its name
<3>[ 2.050000] Slab cache with size 6000 has lost its name
<3>[ 2.050000] Slab cache with size 80 has lost its name
<3>[ 2.050000] Slab cache with size 176 has lost its name
<3>[ 2.050000] Slab cache with size 40 has lost its name
<3>[ 2.050000] Slab cache with size 88 has lost its name
<3>[ 2.050000] Slab cache with size 48 has lost its name
<3>[ 2.050000] Slab cache with size 576 has lost its name
<3>[ 2.050000] Slab cache with size 616 has lost its name
<3>[ 2.050000] Slab cache with size 8192 has lost its name
<3>[ 2.050000] Slab cache with size 4096 has lost its name
<3>[ 2.050000] Slab cache with size 2048 has lost its name
<3>[ 2.050000] Slab cache with size 1024 has lost its name
<3>[ 2.050000] Slab cache with size 512 has lost its name
<3>[ 2.050000] Slab cache with size 256 has lost its name
<3>[ 2.050000] Slab cache with size 192 has lost its name
<3>[ 2.050000] Slab cache with size 128 has lost its name
<3>[ 2.050000] Slab cache with size 96 has lost its name
<3>[ 2.050000] Slab cache with size 64 has lost its name
<3>[ 2.050000] Slab cache with size 32 has lost its name
<3>[ 2.050000] Slab cache with size 16 has lost its name
<3>[ 2.050000] Slab cache with size 8 has lost its name
<3>[ 2.050000] Slab cache with size 128 has lost its name
<3>[ 2.050000] Slab cache with size 200 has lost its name
<6>[ 2.050000] io scheduler cfq registered
<6>[ 2.050000] io scheduler mq-deadline registered
<4>[ 2.050000]
<4>[ 2.050000] Modules linked in:
<6>[ 2.050000] Pid: 1, comm: swapper Not tainted
4.11.0-00007-g594e0e4-dirty
<6>[ 2.050000] RIP: 0033:[<00007f2e5b961eae>]
<6>[ 2.050000] RSP: 0000000067c87c38 EFLAGS: 00010202
<6>[ 2.050000] RAX: 0000000068842000 RBX: 0000000067dabae0 RCX:
0000000000000001
<6>[ 2.050000] RDX: 0000000000008000 RSI: 000000006884a000 RDI:
0000000068842020
<6>[ 2.050000] RBP: 0000000067c87c70 R08: 0000000067dabae7 R09:
0000000067f73000
<6>[ 2.050000] R10: 0000000000000000 R11: 0000000000000000 R12:
0000000060040ae0
<6>[ 2.050000] R13: 0000000000000000 R14: 00000000604e7e90 R15:
0000000060886700
<0>[ 2.050000] Kernel panic - not syncing: Segfault with no mm
<4>[ 2.050000] CPU: 0 PID: 1 Comm: swapper Not tainted
4.11.0-00007-g594e0e4-dirty #5
<6>[ 2.050000] Stack:
<4>[ 2.050000] 604e65b1 200000001 678085d8 67e35000
<4>[ 2.050000] 678085c0 604e38b0 678085c0 67c87cf0
<4>[ 2.050000] 604e3f02 00400000 678085c0 00000200
<6>[ 2.050000] Call Trace:
<6>[ 2.050000] [<604e65b1>] ? check_partition+0x181/0x2c0
<6>[ 2.050000] [<604e38b0>] ? add_partition+0x0/0x5b0
<6>[ 2.050000] [<604e3f02>] rescan_partitions+0xa2/0x470
<6>[ 2.050000] [<604e1bd0>] ? get_gendisk+0x0/0x110
<6>[ 2.050000] [<60228842>] __blkdev_get+0x262/0x610
<6>[ 2.050000] [<601fe3cb>] ? unlock_new_inode+0x8b/0x90
<6>[ 2.050000] [<6027dba0>] ? sysfs_create_link+0x0/0x50
<6>[ 2.050000] [<6022907d>] blkdev_get+0x48d/0x5d0
<6>[ 2.050000] [<60513758>] ? refcount_dec_and_test+0x18/0x20
<6>[ 2.050000] [<604f6d1d>] ? kobject_put+0x5d/0x260
<6>[ 2.050000] [<6027dba0>] ? sysfs_create_link+0x0/0x50
<6>[ 2.050000] [<604e22a1>] device_add_disk+0x3b1/0x610
<6>[ 2.050000] [<604e2925>] ? alloc_disk+0x15/0x20
<6>[ 2.050000] [<604e1ef0>] ? device_add_disk+0x0/0x610
<6>[ 2.050000] [<60030d82>] 0x60030d82
<6>[ 2.050000] [<60030cb0>] ? 0x60030cb0
<6>[ 2.050000] [<60041520>] ? do_one_initcall+0x0/0x1a0
<6>[ 2.050000] [<600415da>] do_one_initcall+0xba/0x1a0
<6>[ 2.050000] [<600928f2>] ? parse_args+0x402/0x6f0
<6>[ 2.050000] [<60041520>] ? do_one_initcall+0x0/0x1a0
<6>[ 2.050000] [<60001fe3>] 0x60001fe3
<6>[ 2.050000] [<6015c8e6>] ? printk+0x0/0x94
<6>[ 2.050000] [<60835037>] kernel_init+0x27/0x150
<6>[ 2.050000] [<60043221>] new_thread_handler+0x81/0xb0
<6>[ 2.050000]
Aborted (core dumped)
Post by Richard Weinberger
Also please test segv1.c, it tests whether WSL allows us to handle
page faults in userspace.
SIGSEGV at 0xdeadbeef, fixing up
x=3, &x=0xdeadbeef
IOW we write to 0xdeadbeef, catch the fault and fix it.
I get this:
***@DESKTOP-DQBDJ0U:/mnt/c/Users/thomas/VmShare$ ./segtest
SIGSEGV at 0x0, fixing up
SIGSEGV at 0x0, fixing up
SIGSEGV at 0x0, fixing up
SIGSEGV at 0x0, fixing up
SIGSEGV at 0x0, fixing up
SIGSEGV at 0x0, fixing up
SIGSEGV at 0x0, fixing up
SIGSEGV at 0x0, fixing up
SIGSEGV at 0x0, fixing up
SIGSEGV at 0x0, fixing up
SIGSEGV at 0x0, fixing up
SIGSEGV at 0x0, fixing up
SIGSEGV at 0x0, fixing up
SIGSEGV at 0x0, fixing up
[...]
Post by Richard Weinberger
Thanks,
//richard
Richard Weinberger
2017-05-09 17:48:32 UTC
Thomas,
Post by Thomas Meyer
Post by Richard Weinberger
IOW we write to 0xdeadbeef, catch the fault and fix it.
SIGSEGV at 0x0, fixing up
SIGSEGV at 0x0, fixing up
SIGSEGV at 0x0, fixing up
SIGSEGV at 0x0, fixing up
SIGSEGV at 0x0, fixing up
SIGSEGV at 0x0, fixing up
SIGSEGV at 0x0, fixing up
SIGSEGV at 0x0, fixing up
SIGSEGV at 0x0, fixing up
SIGSEGV at 0x0, fixing up
SIGSEGV at 0x0, fixing up
SIGSEGV at 0x0, fixing up
SIGSEGV at 0x0, fixing up
SIGSEGV at 0x0, fixing up
Meh, that's a show-stopper.
WSL does not provide a valid signal machine context...

Thanks,
//richard
