
From: Samuel Thibault
Subject: [gnumach] 01/06: New upstream version 1.8+git20170102
Date: Mon, 02 Jan 2017 14:29:14 +0000

This is an automated email from the git hooks/post-receive script.

sthibault pushed a commit to branch master
in repository gnumach.

commit 3cd7e0968dbd876aae9b17deb353ddc73c50a775
Author: Samuel Thibault <address@hidden>
Date:   Mon Jan 2 12:25:20 2017 +0000

    New upstream version 1.8+git20170102
---
 ChangeLog                 | 166 ++++++++++++
 Makefile.in               |   1 +
 Makefrag.am               |   1 +
 NEWS                      |  27 +-
 configure                 |  20 +-
 ddb/db_ext_symtab.c       |   3 +-
 doc/mach.info             | 240 +++++++++---------
 doc/mach.info-1           |  54 +++-
 doc/mach.info-2           |  11 +-
 doc/mach.texi             |  47 +++-
 doc/stamp-vti             |   8 +-
 doc/version.texi          |   8 +-
 i386/i386/user_ldt.c      |   2 +-
 i386/i386/vm_param.h      |  11 +-
 i386/intel/pmap.c         |   6 +-
 include/mach/gnumach.defs |  15 ++
 include/mach/mach_types.h |   1 +
 include/mach/vm_wire.h    |  30 +++
 ipc/ipc_kmsg.c            |   2 -
 ipc/mach_port.c           |  12 +-
 kern/rbtree.h             |   2 +-
 version.m4                |   2 +-
 vm/vm_debug.c             |   4 +-
 vm/vm_map.c               | 627 +++++++++++++++++++++++++++-------------------
 vm/vm_map.h               |  22 +-
 vm/vm_page.c              |  28 ++-
 vm/vm_pageout.c           |   6 +-
 vm/vm_user.c              |  40 ++-
 28 files changed, 929 insertions(+), 467 deletions(-)

diff --git a/ChangeLog b/ChangeLog
index f8397b0..ebe0116 100644
--- a/ChangeLog
+++ b/ChangeLog
@@ -1,3 +1,169 @@
+2016-12-27  Richard Braun  <address@hidden>
+
+       VM: really fix pageout of external objects backed by the default pager
+       Commit eb07428ffb0009085fcd01dd1b79d9953af8e0ad does fix pageout of
+       external objects backed by the default pager, but the way it's done
+       has a vicious side effect: because they're considered external, the
+       pageout daemon can keep evicting them even though the external pagers
+       haven't released them, unlike internal pages which must all be
+       released before the pageout daemon can make progress. This can lead
+       to a situation where too many pages become wired, the default pager
+       cannot allocate memory to process new requests, and the pageout
+       daemon cannot recycle any more pages, causing a panic.
+
+       This change makes the pageout daemon use the same strategy for both
+       internal pages and external pages sent to the default pager: use
+       the laundry bit and wait for all laundry pages to be released,
+       thereby completely synchronizing the pageout daemon and the default
+       pager.
+
+       * vm/vm_page.c (vm_page_can_move): Allow external laundry pages to
+       be moved.
+       (vm_page_seg_evict): Don't alter the `external_laundry' bit, merely
+       disable double paging for external pages sent to the default pager.
+       * vm/vm_pageout.c: Include vm/memory_object.h.
+       (vm_pageout_setup): Don't check whether the `external_laundry' bit
+       is set, but handle external pages sent to the default pager the same
+       as internal pages.
+
+2016-12-25  Richard Braun  <address@hidden>
+
+       Increase the size of the kernel map
+       Sometimes, in particular during IO spikes, the slab allocator needs
+       more virtual memory than is currently available. The new size should
+       also be fine for the Xen version.
+
+       * i386/i386/vm_param.h (VM_KERNEL_MAP_SIZE): Increase value.
+
+2016-12-24  Richard Braun  <address@hidden>
+
+       doc: update documentation about wiring
+       * doc/mach.texi: Describe vm_wire_all, and add more information
+       about vm_wire and vm_protect.
+
+2016-12-24  Richard Braun  <address@hidden>
+
+       VM: add the vm_wire_all call
+       This call maps the POSIX mlockall and munlockall calls.
+
+       * Makefrag.am (include_mach_HEADERS): Add include/mach/vm_wire.h.
+       * include/mach/gnumach.defs (vm_wire_t): New type.
+       (vm_wire_all): New routine.
+       * include/mach/mach_types.h: Include mach/vm_wire.h.
+       * vm/vm_map.c: Likewise.
+       (vm_map_enter): Automatically wire new entries if requested.
+       (vm_map_copyout): Likewise.
+       (vm_map_pageable_all): New function.
+       * vm/vm_map.h: Include mach/vm_wire.h.
+       (struct vm_map): Update description of member `wiring_required'.
+       (vm_map_pageable_all): New function.
+       * vm/vm_user.c (vm_wire_all): New function.
+
+2016-12-24  Richard Braun  <address@hidden>
+
+       VM: rework map entry wiring
+       First, user wiring is removed, simply because it has never been used.
+
+       Second, make the VM system track wiring requests to better handle
+       protection. This change makes it possible to wire entries with
+       VM_PROT_NONE protection without actually reserving any page for
+       them until protection changes, and even make those pages pageable
+       if protection is downgraded to VM_PROT_NONE.
+
+       * ddb/db_ext_symtab.c: Update call to vm_map_pageable.
+       * i386/i386/user_ldt.c: Likewise.
+       * ipc/mach_port.c: Likewise.
+       * vm/vm_debug.c (mach_vm_region_info): Update values returned
+       as appropriate.
+       * vm/vm_map.c (vm_map_entry_copy): Update operation as appropriate.
+       (vm_map_setup): Update member names as appropriate.
+       (vm_map_find_entry): Update to account for map member variable changes.
+       (vm_map_enter): Likewise.
+       (vm_map_entry_inc_wired): New function.
+       (vm_map_entry_reset_wired): Likewise.
+       (vm_map_pageable_scan): Likewise.
+       (vm_map_protect): Update wired access, call vm_map_pageable_scan.
+       (vm_map_pageable_common): Rename to ...
+       (vm_map_pageable): ... and rewrite to use vm_map_pageable_scan.
+       (vm_map_entry_delete): Fix unwiring.
+       (vm_map_copy_overwrite): Replace inline code with a call to
+       vm_map_entry_reset_wired.
+       (vm_map_copyin_page_list): Likewise.
+       (vm_map_print): Likewise. Also print map size and wired size.
+       (vm_map_copyout_page_list): Update to account for map member variable
+       changes.
+       * vm/vm_map.h (struct vm_map_entry): Remove `user_wired_count' member,
+       add `wired_access' member.
+       (struct vm_map): Rename `user_wired' member to `size_wired'.
+       (vm_map_pageable_common): Remove function.
+       (vm_map_pageable_user): Remove macro.
+       (vm_map_pageable): Replace macro with function declaration.
+       * vm/vm_user.c (vm_wire): Update call to vm_map_pageable.
+
+2016-12-24  Richard Braun  <address@hidden>
+
+       VM: fix pageout of external objects backed by the default pager
+       Double paging on such objects causes deadlocks.
+
+       * vm/vm_page.c: Include <vm/memory_object.h>.
+       (vm_page_seg_evict): Rename laundry to double_paging to increase
+       clarity. Set the `external_laundry' bit when evicting a page
+       from an external object backed by the default pager.
+       * vm/vm_pageout.c (vm_pageout_setup): Wire page if the
+       `external_laundry' bit is set.
+
+2016-12-24  Richard Braun  <address@hidden>
+
+       VM: fix pageability check
+       Unlike laundry pages sent to the default pager, pages marked with the
+       `external_laundry' bit remain in the page queues and must be filtered
+       out by the pageability check.
+
+       * vm/vm_page.c (vm_page_can_move): Check the `external_laundry' bit.
+
+2016-12-24  Richard Braun  <address@hidden>
+
+       VM: fix mapping removal on wired pages
+       Memory wiring is about to be reworked, at which point the VM system
+       will properly track wired mappings. Removing them when changing
+       protection makes sense, and is fine as long as the VM system
+       rewires them when access is restored.
+
+       * i386/intel/pmap.c (pmap_page_protect): Decrease wiring count instead
+       of causing a panic when removing a wired mapping.
+
+2016-12-21  Richard Braun  <address@hidden>
+
+       VM: fix pageout timeout
+       The interval parameter to the thread_set_timeout function is actually
+       in ticks.
+
+       * vm/vm_pageout.c (vm_pageout): Fix call to thread_set_timeout.
+
+2016-12-18  Thomas Schwinge  <address@hidden>
+
+       GNU Mach 1.8
+       * version.m4 (AC_PACKAGE_VERSION): Set to 1.8.
+       * NEWS: Finalize for 1.8.
+
+2016-12-11  Richard Braun  <address@hidden>
+
+       VM: make vm_wire more POSIX-friendly
+       * doc/mach.texi: Update return codes.
+       * vm/vm_map.c (vm_map_pageable_common): Return KERN_NO_SPACE instead
+       of KERN_FAILURE if some of the specified address range does not
+       correspond to mapped pages. Skip unwired entries instead of failing
+       when unwiring.
+
+2016-12-09  Justus Winter  <address@hidden>
+
+       Update the NEWS file
+
+2016-12-09  Richard Braun  <address@hidden>
+
+       rbtree: minor change
+       * kern/rbtree.h (rbtree_for_each_remove): Remove trailing slash.
+
 2032-05-12  Richard Braun  <address@hidden>
 
        VM: fix pageout throttling to external pagers
diff --git a/Makefile.in b/Makefile.in
index b7caa10..f12bdab 100644
--- a/Makefile.in
+++ b/Makefile.in
@@ -2748,6 +2748,7 @@ include_mach_HEADERS = \
        include/mach/vm_param.h \
        include/mach/vm_prot.h \
        include/mach/vm_statistics.h \
+       include/mach/vm_wire.h \
        include/mach/inline.h \
        include/mach/xen.h
 
diff --git a/Makefrag.am b/Makefrag.am
index e001d65..c16f1c7 100644
--- a/Makefrag.am
+++ b/Makefrag.am
@@ -418,6 +418,7 @@ include_mach_HEADERS = \
        include/mach/vm_param.h \
        include/mach/vm_prot.h \
        include/mach/vm_statistics.h \
+       include/mach/vm_wire.h \
        include/mach/inline.h \
        include/mach/xen.h
 
diff --git a/NEWS b/NEWS
index a14ac2b..8349550 100644
--- a/NEWS
+++ b/NEWS
@@ -1,6 +1,29 @@
-Version 1.8 (2016-10-XX)
+Version 1.8 (2016-12-18)
+
+The memory management system was extensively reworked.  A new type for
+physical addresses is now used where appropriate, and the system can
+make use of the high memory segment.  Many paging issues have been
+addressed, and as a result the system handles low memory situations
+more gracefully now.
+
+The virtual memory system now uses a red-black tree for allocations,
+and as a result it now supports tasks with tens of thousands of
+mappings.
+
+Debugging and error reporting has been improved.  Among other things
+the VM maps are now augmented with names that are used in error
+messages, panics and assertions point to their locations, the lock
+debugging mechanism has been fixed, and the kernel debugger can now
+inspect stack traces reaching into the machine-dependent bits
+implemented in assembler.
+
+As usual, bugs have been fixed throughout the code, including minor
+issues with the gsync synchronization mechanism which is now used for
+the internal locks in the GNU C Library (glibc).
 
 The deprecated external memory management interface has been removed.
+
+The partial ACPI support has been removed.
 
 Version 1.7 (2016-05-18)
 
@@ -63,7 +86,7 @@ The kernel debugger can now parse ELF symbol tables, can be invoked
 over serial lines, gained two new commands and has received usability
 improvements.
 
-The vm pageout policy has been tuned to accommodate modern hardware.
+The VM pageout policy has been tuned to accommodate modern hardware.
 
 The kernel gained partial ACPI support on x86, enough to power down
 the system.
diff --git a/configure b/configure
index e7a4219..7d869e4 100755
--- a/configure
+++ b/configure
@@ -1,6 +1,6 @@
 #! /bin/sh
 # Guess values for system-dependent variables and create Makefiles.
-# Generated by GNU Autoconf 2.69 for GNU Mach 1.7+git20161202.
+# Generated by GNU Autoconf 2.69 for GNU Mach 1.8+git20170102.
 #
 # Report bugs to <address@hidden>.
 #
@@ -579,8 +579,8 @@ MAKEFLAGS=
 # Identity of this package.
 PACKAGE_NAME='GNU Mach'
 PACKAGE_TARNAME='gnumach'
-PACKAGE_VERSION='1.7+git20161202'
-PACKAGE_STRING='GNU Mach 1.7+git20161202'
+PACKAGE_VERSION='1.8+git20170102'
+PACKAGE_STRING='GNU Mach 1.8+git20170102'
 PACKAGE_BUGREPORT='address@hidden'
 PACKAGE_URL=''
 
@@ -1599,7 +1599,7 @@ if test "$ac_init_help" = "long"; then
   # Omit some internal or obsolete options to make the list less imposing.
   # This message is too long to be a string in the A/UX 3.1 sh.
   cat <<_ACEOF
-\`configure' configures GNU Mach 1.7+git20161202 to adapt to many kinds of systems.
+\`configure' configures GNU Mach 1.8+git20170102 to adapt to many kinds of systems.
 
 Usage: $0 [OPTION]... [VAR=VALUE]...
 
@@ -1670,7 +1670,7 @@ fi
 
 if test -n "$ac_init_help"; then
   case $ac_init_help in
-     short | recursive ) echo "Configuration of GNU Mach 1.7+git20161202:";;
+     short | recursive ) echo "Configuration of GNU Mach 1.8+git20170102:";;
    esac
   cat <<\_ACEOF
 
@@ -2026,7 +2026,7 @@ fi
 test -n "$ac_init_help" && exit $ac_status
 if $ac_init_version; then
   cat <<\_ACEOF
-GNU Mach configure 1.7+git20161202
+GNU Mach configure 1.8+git20170102
 generated by GNU Autoconf 2.69
 
 Copyright (C) 2012 Free Software Foundation, Inc.
@@ -2118,7 +2118,7 @@ cat >config.log <<_ACEOF
 This file contains any messages produced by compilers while
 running configure, to aid debugging if configure makes a mistake.
 
-It was created by GNU Mach $as_me 1.7+git20161202, which was
+It was created by GNU Mach $as_me 1.8+git20170102, which was
 generated by GNU Autoconf 2.69.  Invocation command line was
 
   $ $0 $@
@@ -2984,7 +2984,7 @@ fi
 
 # Define the identity of the package.
  PACKAGE='gnumach'
- VERSION='1.7+git20161202'
+ VERSION='1.8+git20170102'
 
 
 # Some tools Automake needs.
@@ -12189,7 +12189,7 @@ cat >>$CONFIG_STATUS <<\_ACEOF || ac_write_fail=1
 # report actual input values of CONFIG_FILES etc. instead of their
 # values after options handling.
 ac_log="
-This file was extended by GNU Mach $as_me 1.7+git20161202, which was
+This file was extended by GNU Mach $as_me 1.8+git20170102, which was
 generated by GNU Autoconf 2.69.  Invocation command line was
 
   CONFIG_FILES    = $CONFIG_FILES
@@ -12260,7 +12260,7 @@ _ACEOF
 cat >>$CONFIG_STATUS <<_ACEOF || ac_write_fail=1
 ac_cs_config="`$as_echo "$ac_configure_args" | sed 's/^ //; s/[\\""\`\$]/\\\\&/g'`"
 ac_cs_version="\\
-GNU Mach config.status 1.7+git20161202
+GNU Mach config.status 1.8+git20170102
 configured by $0, generated by GNU Autoconf 2.69,
   with options \\"\$ac_cs_config\\"
 
diff --git a/ddb/db_ext_symtab.c b/ddb/db_ext_symtab.c
index cafb0c4..e1bdfd8 100644
--- a/ddb/db_ext_symtab.c
+++ b/ddb/db_ext_symtab.c
@@ -106,7 +106,8 @@ host_load_symbol_table(
        (void) vm_map_pageable(kernel_map,
                symtab_start,
                round_page(symtab_end),
-               VM_PROT_READ|VM_PROT_WRITE);
+               VM_PROT_READ|VM_PROT_WRITE,
+               TRUE, TRUE);
 
        /*
         * Discard the original copy object
diff --git a/doc/mach.info b/doc/mach.info
index cf13082..0428c7e 100644
--- a/doc/mach.info
+++ b/doc/mach.info
@@ -2,8 +2,8 @@ This is mach.info, produced by makeinfo version 6.3 from mach.texi.
 
 This file documents the GNU Mach microkernel.
 
-   This is edition 0.4, last updated on 16 October 2016, of 'The GNU
-Mach Reference Manual', for version 1.7+git20161202.
+   This is edition 0.4, last updated on 2 January 2017, of 'The GNU Mach
+Reference Manual', for version 1.8+git20170102.
 
    Copyright (C) 2001, 2002, 2006, 2007, 2008 Free Software Foundation,
 Inc.
@@ -39,126 +39,126 @@ END-INFO-DIR-ENTRY
 
 
 Indirect:
-mach.info-1: 1641
-mach.info-2: 302874
+mach.info-1: 1640
+mach.info-2: 304532
 
 Tag Table:
 (Indirect)
-Node: Top1641
-Node: Introduction11280
-Node: Audience12111
-Node: Features13146
-Node: Overview14973
-Node: History16166
-Node: Installing16311
-Node: Binary Distributions17536
-Node: Compilation18344
-Node: Configuration19577
-Node: Cross-Compilation35988
-Node: Bootstrap36769
-Ref: Bootstrap-Footnote-137212
-Node: Bootloader37449
-Ref: Bootloader-Footnote-138729
-Node: Modules38815
-Node: Inter Process Communication39642
-Node: Major Concepts40265
-Node: Messaging Interface44070
-Node: Mach Message Call44800
-Node: Message Format48115
-Node: Exchanging Port Rights59307
-Ref: Exchanging Port Rights-Footnote-164869
-Node: Memory65041
-Ref: Memory-Footnote-168135
-Node: Message Send68477
-Ref: Message Send-Footnote-175499
-Node: Message Receive75782
-Ref: Message Receive-Footnote-185434
-Node: Atomicity85715
-Node: Port Manipulation Interface88489
-Node: Port Creation90044
-Node: Port Destruction94833
-Node: Port Names97976
-Node: Port Rights102223
-Node: Ports and other Tasks106012
-Node: Receive Rights110105
-Node: Port Sets117036
-Node: Request Notifications119439
-Node: Inherited Ports124243
-Node: Virtual Memory Interface127927
-Node: Memory Allocation129180
-Node: Memory Deallocation131705
-Node: Data Transfer133169
-Node: Memory Attributes136695
-Node: Mapping Memory Objects146134
-Node: Memory Statistics149426
-Node: External Memory Management151000
-Node: Memory Object Server151705
-Node: Memory Object Creation154386
-Node: Memory Object Termination158374
-Node: Memory Objects and Data161313
-Node: Memory Object Locking175218
-Node: Memory Object Attributes181082
-Node: Default Memory Manager184857
-Node: Threads and Tasks190539
-Node: Thread Interface190876
-Node: Thread Creation191872
-Node: Thread Termination192989
-Node: Thread Information193460
-Node: Thread Settings199559
-Node: Thread Execution200793
-Node: Scheduling208086
-Node: Thread Priority208441
-Node: Hand-Off Scheduling211075
-Node: Scheduling Policy216200
-Node: Thread Special Ports217532
-Node: Exceptions219978
-Node: Task Interface220848
-Node: Task Creation221860
-Node: Task Termination223195
-Node: Task Information223797
-Node: Task Execution230699
-Node: Task Special Ports235112
-Node: Syscall Emulation238966
-Node: Profiling240197
-Node: Host Interface243960
-Node: Host Ports244945
-Node: Host Information247018
-Node: Host Time252401
-Node: Host Reboot255068
-Node: Processors and Processor Sets255620
-Node: Processor Set Interface256598
-Node: Processor Set Ports257365
-Node: Processor Set Access258195
-Node: Processor Set Creation260455
-Node: Processor Set Destruction261482
-Node: Tasks and Threads on Sets262403
-Node: Processor Set Priority267570
-Node: Processor Set Policy268860
-Node: Processor Set Info270474
-Node: Processor Interface274287
-Node: Hosted Processors275012
-Node: Processor Control276003
-Node: Processors and Sets277469
-Node: Processor Info279347
-Node: Device Interface282089
-Node: Device Reply Server283704
-Node: Device Open284996
-Node: Device Close287119
-Node: Device Read287698
-Node: Device Write290617
-Node: Device Map293422
-Node: Device Status294313
-Node: Device Filter295486
-Node: Kernel Debugger302874
-Node: Operation303601
-Node: Commands306578
-Node: Variables320363
-Node: Expressions321751
-Node: Copying323100
-Node: Documentation License342329
-Node: GNU Free Documentation License342918
-Node: CMU License365317
-Node: Concept Index366552
-Node: Function and Data Index370398
+Node: Top1640
+Node: Introduction11278
+Node: Audience12109
+Node: Features13144
+Node: Overview14971
+Node: History16164
+Node: Installing16309
+Node: Binary Distributions17534
+Node: Compilation18342
+Node: Configuration19575
+Node: Cross-Compilation35986
+Node: Bootstrap36767
+Ref: Bootstrap-Footnote-137210
+Node: Bootloader37447
+Ref: Bootloader-Footnote-138727
+Node: Modules38813
+Node: Inter Process Communication39640
+Node: Major Concepts40263
+Node: Messaging Interface44068
+Node: Mach Message Call44798
+Node: Message Format48113
+Node: Exchanging Port Rights59305
+Ref: Exchanging Port Rights-Footnote-164867
+Node: Memory65039
+Ref: Memory-Footnote-168133
+Node: Message Send68475
+Ref: Message Send-Footnote-175497
+Node: Message Receive75780
+Ref: Message Receive-Footnote-185432
+Node: Atomicity85713
+Node: Port Manipulation Interface88487
+Node: Port Creation90042
+Node: Port Destruction94831
+Node: Port Names97974
+Node: Port Rights102221
+Node: Ports and other Tasks106010
+Node: Receive Rights110103
+Node: Port Sets117034
+Node: Request Notifications119437
+Node: Inherited Ports124241
+Node: Virtual Memory Interface127925
+Node: Memory Allocation129178
+Node: Memory Deallocation131703
+Node: Data Transfer133167
+Node: Memory Attributes136693
+Node: Mapping Memory Objects147793
+Node: Memory Statistics151085
+Node: External Memory Management152659
+Node: Memory Object Server153364
+Node: Memory Object Creation156045
+Node: Memory Object Termination160033
+Node: Memory Objects and Data162972
+Node: Memory Object Locking176877
+Node: Memory Object Attributes182741
+Node: Default Memory Manager186516
+Node: Threads and Tasks192198
+Node: Thread Interface192535
+Node: Thread Creation193531
+Node: Thread Termination194648
+Node: Thread Information195119
+Node: Thread Settings201218
+Node: Thread Execution202452
+Node: Scheduling209745
+Node: Thread Priority210100
+Node: Hand-Off Scheduling212734
+Node: Scheduling Policy217859
+Node: Thread Special Ports219191
+Node: Exceptions221637
+Node: Task Interface222507
+Node: Task Creation223519
+Node: Task Termination224854
+Node: Task Information225456
+Node: Task Execution232358
+Node: Task Special Ports236771
+Node: Syscall Emulation240625
+Node: Profiling241856
+Node: Host Interface245619
+Node: Host Ports246604
+Node: Host Information248677
+Node: Host Time254060
+Node: Host Reboot256727
+Node: Processors and Processor Sets257279
+Node: Processor Set Interface258257
+Node: Processor Set Ports259024
+Node: Processor Set Access259854
+Node: Processor Set Creation262114
+Node: Processor Set Destruction263141
+Node: Tasks and Threads on Sets264062
+Node: Processor Set Priority269229
+Node: Processor Set Policy270519
+Node: Processor Set Info272133
+Node: Processor Interface275946
+Node: Hosted Processors276671
+Node: Processor Control277662
+Node: Processors and Sets279128
+Node: Processor Info281006
+Node: Device Interface283748
+Node: Device Reply Server285363
+Node: Device Open286655
+Node: Device Close288778
+Node: Device Read289357
+Node: Device Write292276
+Node: Device Map295081
+Node: Device Status295972
+Node: Device Filter297145
+Node: Kernel Debugger304532
+Node: Operation305259
+Node: Commands308236
+Node: Variables322021
+Node: Expressions323409
+Node: Copying324758
+Node: Documentation License343987
+Node: GNU Free Documentation License344576
+Node: CMU License366975
+Node: Concept Index368210
+Node: Function and Data Index372056
 
 End Tag Table
diff --git a/doc/mach.info-1 b/doc/mach.info-1
index 126d593..449b1a1 100644
--- a/doc/mach.info-1
+++ b/doc/mach.info-1
@@ -2,8 +2,8 @@ This is mach.info, produced by makeinfo version 6.3 from mach.texi.
 
 This file documents the GNU Mach microkernel.
 
-   This is edition 0.4, last updated on 16 October 2016, of 'The GNU
-Mach Reference Manual', for version 1.7+git20161202.
+   This is edition 0.4, last updated on 2 January 2017, of 'The GNU Mach
+Reference Manual', for version 1.8+git20170102.
 
    Copyright (C) 2001, 2002, 2006, 2007, 2008 Free Software Foundation,
 Inc.
@@ -45,8 +45,8 @@ Main Menu
 
 This file documents the GNU Mach microkernel.
 
-   This is edition 0.4, last updated on 16 October 2016, of 'The GNU
-Mach Reference Manual', for version 1.7+git20161202.
+   This is edition 0.4, last updated on 2 January 2017, of 'The GNU Mach
+Reference Manual', for version 1.8+git20170102.
 
    Copyright (C) 2001, 2002, 2006, 2007, 2008 Free Software Foundation,
 Inc.
@@ -3180,6 +3180,9 @@ File: mach.info,  Node: Memory Attributes,  Next: Mapping Memory Objects,  Prev:
      'VM_PROT_WRITE' permission and execute access to require
      'VM_PROT_READ' permission.
 
+     If a region is wired, changing its protection also updates the
+     access types for which no page faults must occur.
+
      The function returns 'KERN_SUCCESS' if the memory was successfully
      protected, 'KERN_INVALID_ADDRESS' if an invalid or non-allocated
      address was specified and 'KERN_PROTECTION_FAILURE' if an attempt
@@ -3233,14 +3236,19 @@ File: mach.info,  Node: Memory Attributes,  Next: Mapping Memory Objects,  Prev:
      with a access argument of 'VM_PROT_READ | VM_PROT_WRITE'.  A
      special case is that 'VM_PROT_NONE' makes the memory pageable.
 
+     Wiring doesn't stack, i.e.  a single call to 'vm_wire' with ACCESS
+     'VM_PROT_NONE' unwires the specified range, regardless of how many
+     times it was previously wired.  Conversely, a single call to
+     'vm_wire' with ACCESS 'VM_PROT_READ | VM_PROT_WRITE' wires the
+     specified range, regardless of how many times it was previously
+     unwired.
+
      The function returns 'KERN_SUCCESS' if the call succeeded,
      'KERN_INVALID_HOST' if HOST was not a valid host port,
      'KERN_INVALID_TASK' if TASK was not a valid task,
      'KERN_INVALID_VALUE' if ACCESS specified an invalid access mode,
-     'KERN_FAILURE' if some memory in the specified range is not present
-     or has an inappropriate protection value, and
-     'KERN_INVALID_ARGUMENT' if unwiring (ACCESS is 'VM_PROT_NONE') and
-     the memory is not already wired.
+     and 'KERN_NO_SPACE' if some memory in the specified range is not
+     present or has an inappropriate protection value.
 
      The 'vm_wire' call is actually an RPC to HOST, normally a send
      right for a privileged host port, but potentially any send right.
@@ -3248,6 +3256,36 @@ File: mach.info,  Node: Memory Attributes,  Next: Mapping Memory Objects,  Prev:
      server (normally the kernel), the call may return 'mach_msg' return
      codes.
 
+ -- Function: kern_return_t vm_wire_all (host_t HOST,
+          vm_task_t TARGET_TASK, vm_wire_t FLAGS)
+     The function 'vm_wire_all' allows applications to control memory
+     pageability, as with 'vm_wire', but applies to all current and/or
+     future mappings.
+
+     The argument FLAGS is a bitwise-or combination of the bits below.
+
+     'VM_WIRE_CURRENT'
+          All currently existing entries are wired, with access types
+          matching their protection.
+
+     'VM_WIRE_FUTURE'
+          All future entries are automatically wired, with access types
+          matching their protection.
+
+     If FLAGS specifies no bits ('VM_WIRE_NONE'), all current entries
+     are unwired, and future entries are no longer automatically wired.
+
+     The function returns 'KERN_SUCCESS' if the call succeeded,
+     'KERN_INVALID_HOST' if HOST was not a valid host port,
+     'KERN_INVALID_TASK' if TASK was not a valid task, and
+     'KERN_INVALID_VALUE' if FLAGS specifies invalid bits.
+
+     The 'vm_wire_all' call is actually an RPC to HOST, normally a send
+     right for a privileged host port, but potentially any send right.
+     In addition to the normal diagnostic return codes from the call's
+     server (normally the kernel), the call may return 'mach_msg' return
+     codes.
+
  -- Function: kern_return_t vm_machine_attribute (vm_task_t TASK,
           vm_address_t ADDRESS, vm_size_t SIZE, vm_prot_t ACCESS,
           vm_machine_attribute_t ATTRIBUTE,
diff --git a/doc/mach.info-2 b/doc/mach.info-2
index 6163fd8..c253c14 100644
--- a/doc/mach.info-2
+++ b/doc/mach.info-2
@@ -2,8 +2,8 @@ This is mach.info, produced by makeinfo version 6.3 from mach.texi.
 
 This file documents the GNU Mach microkernel.
 
-   This is edition 0.4, last updated on 16 October 2016, of 'The GNU
-Mach Reference Manual', for version 1.7+git20161202.
+   This is edition 0.4, last updated on 2 January 2017, of 'The GNU Mach
+Reference Manual', for version 1.8+git20170102.
 
    Copyright (C) 2001, 2002, 2006, 2007, 2008 Free Software Foundation,
 Inc.
@@ -1770,8 +1770,8 @@ Function and Data Index
 * vm_allocate:                           Memory Allocation.   (line   6)
 * vm_copy:                               Data Transfer.       (line  50)
 * vm_deallocate:                         Memory Deallocation. (line   6)
-* vm_inherit:                            Memory Attributes.   (line  68)
-* vm_machine_attribute:                  Memory Attributes.   (line 132)
+* vm_inherit:                            Memory Attributes.   (line  71)
+* vm_machine_attribute:                  Memory Attributes.   (line 171)
 * vm_map:                                Mapping Memory Objects.
                                                               (line   6)
 * vm_protect:                            Memory Attributes.   (line  34)
@@ -1783,6 +1783,7 @@ Function and Data Index
 * vm_statistics_data_t:                  Memory Statistics.   (line   6)
 * vm_task_t:                             Virtual Memory Interface.
                                                               (line   6)
-* vm_wire:                               Memory Attributes.   (line  99)
+* vm_wire:                               Memory Attributes.   (line 102)
+* vm_wire_all:                           Memory Attributes.   (line 140)
 * vm_write:                              Data Transfer.       (line  31)
 
diff --git a/doc/mach.texi b/doc/mach.texi
index 99ee854..756731e 100644
--- a/doc/mach.texi
+++ b/doc/mach.texi
@@ -3204,6 +3204,9 @@ interface allows write access to require @code{VM_PROT_READ} and
 @code{VM_PROT_WRITE} permission and execute access to require
 @code{VM_PROT_READ} permission.
 
+If a region is wired, changing its protection also updates the
+access types for which no page faults must occur.
+
 The function returns @code{KERN_SUCCESS} if the memory was successfully
 protected, @code{KERN_INVALID_ADDRESS} if an invalid or non-allocated
 address was specified and @code{KERN_PROTECTION_FAILURE} if an attempt
@@ -3257,14 +3260,19 @@ included in access.  Data memory can be made non-pageable (wired) with a
 access argument of @code{VM_PROT_READ | VM_PROT_WRITE}.  A special case
 is that @code{VM_PROT_NONE} makes the memory pageable.
 
+Wiring doesn't stack, i.e. a single call to @code{vm_wire} with
+@var{access} @code{VM_PROT_NONE} unwires the specified range,
+regardless of how many times it was previously wired. Conversely,
+a single call to @code{vm_wire} with @var{access}
+@code{VM_PROT_READ | VM_PROT_WRITE} wires the specified range,
+regardless of how many times it was previously unwired.
+
 The function returns @code{KERN_SUCCESS} if the call succeeded,
 @code{KERN_INVALID_HOST} if @var{host} was not a valid host
 port, @code{KERN_INVALID_TASK} if @var{task} was not a valid task,
 @code{KERN_INVALID_VALUE} if @var{access} specified an invalid access
-mode, @code{KERN_FAILURE} if some memory in the specified range is not
-present or has an inappropriate protection value, and
-@code{KERN_INVALID_ARGUMENT} if unwiring (@var{access} is
-@code{VM_PROT_NONE}) and the memory is not already wired.
+mode, and @code{KERN_NO_SPACE} if some memory in the specified range
+is not present or has an inappropriate protection value.
 
 The @code{vm_wire} call is actually an RPC to @var{host}, normally
 a send right for a privileged host port, but potentially any send right.
@@ -3272,6 +3280,37 @@ In addition to the normal diagnostic return codes from the call's server
 (normally the kernel), the call may return @code{mach_msg} return codes.
 @end deftypefun
 
+@deftypefun kern_return_t vm_wire_all (@w{host_t @var{host}}, @w{vm_task_t @var{target_task}}, @w{vm_wire_t @var{flags}})
+The function @code{vm_wire_all} allows applications to control
+memory pageability, as with @code{vm_wire}, but applies to all
+current and/or future mappings.
+
+The argument @var{flags} is a bitwise-or combination of the bits below.
+
+@table @code
+@item VM_WIRE_CURRENT
+All currently existing entries are wired, with access types matching
+their protection.
+
+@item VM_WIRE_FUTURE
+All future entries are automatically wired, with access types matching
+their protection.
+@end table
+
+If @var{flags} specifies no bits (@code{VM_WIRE_NONE}), all current entries
+are unwired, and future entries are no longer automatically wired.
+
+The function returns @code{KERN_SUCCESS} if the call succeeded,
+@code{KERN_INVALID_HOST} if @var{host} was not a valid host port,
+@code{KERN_INVALID_TASK} if @var{task} was not a valid task,
+and @code{KERN_INVALID_VALUE} if @var{flags} specifies invalid bits.
+
+The @code{vm_wire_all} call is actually an RPC to @var{host}, normally
+a send right for a privileged host port, but potentially any send right.
+In addition to the normal diagnostic return codes from the call's server
+(normally the kernel), the call may return @code{mach_msg} return codes.
+@end deftypefun
+
 @deftypefun kern_return_t vm_machine_attribute (@w{vm_task_t @var{task}}, @w{vm_address_t @var{address}}, @w{vm_size_t @var{size}}, @w{vm_prot_t @var{access}}, @w{vm_machine_attribute_t @var{attribute}}, @w{vm_machine_attribute_val_t @var{value}})
 The function @code{vm_machine_attribute} specifies machine-specific
 attributes for a VM mapping, such as cachability, migrability,
diff --git a/doc/stamp-vti b/doc/stamp-vti
index 1d1d8d7..9278519 100644
--- a/doc/stamp-vti
+++ b/doc/stamp-vti
@@ -1,4 +1,4 @@
-@set UPDATED 16 October 2016
-@set UPDATED-MONTH October 2016
-@set EDITION 1.7+git20161202
-@set VERSION 1.7+git20161202
+@set UPDATED 2 January 2017
+@set UPDATED-MONTH January 2017
+@set EDITION 1.8+git20170102
+@set VERSION 1.8+git20170102
diff --git a/doc/version.texi b/doc/version.texi
index 1d1d8d7..9278519 100644
--- a/doc/version.texi
+++ b/doc/version.texi
@@ -1,4 +1,4 @@
-@set UPDATED 16 October 2016
-@set UPDATED-MONTH October 2016
-@set EDITION 1.7+git20161202
-@set VERSION 1.7+git20161202
+@set UPDATED 2 January 2017
+@set UPDATED-MONTH January 2017
+@set EDITION 1.8+git20170102
+@set VERSION 1.8+git20170102
diff --git a/i386/i386/user_ldt.c b/i386/i386/user_ldt.c
index e7705b5..09500b4 100644
--- a/i386/i386/user_ldt.c
+++ b/i386/i386/user_ldt.c
@@ -96,7 +96,7 @@ i386_set_ldt(
            (void) vm_map_pageable(ipc_kernel_map,
                        dst_addr,
                        dst_addr + count * sizeof(struct real_descriptor),
-                       VM_PROT_READ|VM_PROT_WRITE);
+                       VM_PROT_READ|VM_PROT_WRITE, TRUE, TRUE);
            desc_list = (struct real_descriptor *)dst_addr;
        }
 
diff --git a/i386/i386/vm_param.h b/i386/i386/vm_param.h
index 2635c2c..7051b7a 100644
--- a/i386/i386/vm_param.h
+++ b/i386/i386/vm_param.h
@@ -56,12 +56,11 @@
 #define VM_MAX_KERNEL_ADDRESS  (LINEAR_MAX_KERNEL_ADDRESS - LINEAR_MIN_KERNEL_ADDRESS + VM_MIN_KERNEL_ADDRESS)
 #endif /* MACH_PV_PAGETABLES */
 
-/* Reserve mapping room for kmem. */
-#ifdef MACH_XEN
-#define VM_KERNEL_MAP_SIZE (128 * 1024 * 1024)
-#else
-#define VM_KERNEL_MAP_SIZE (96 * 1024 * 1024)
-#endif
+/*
+ * Reserve mapping room for the kernel map, which includes
+ * the device I/O map and the IPC map.
+ */
+#define VM_KERNEL_MAP_SIZE (152 * 1024 * 1024)
 
 /* The kernel virtual address space is actually located
    at high linear addresses.
diff --git a/i386/intel/pmap.c b/i386/intel/pmap.c
index b51aed9..505b206 100644
--- a/i386/intel/pmap.c
+++ b/i386/intel/pmap.c
@@ -1648,8 +1648,10 @@ void pmap_page_protect(
                    /*
                     * Remove the mapping, collecting any modify bits.
                     */
-                   if (*pte & INTEL_PTE_WIRED)
-                       panic("pmap_page_protect removing a wired page");
+
+                   if (*pte & INTEL_PTE_WIRED) {
+                       pmap->stats.wired_count--;
+                   }
 
                    {
                        int     i = ptes_per_vm_page;
diff --git a/include/mach/gnumach.defs b/include/mach/gnumach.defs
index 5235df6..b484acc 100644
--- a/include/mach/gnumach.defs
+++ b/include/mach/gnumach.defs
@@ -35,6 +35,8 @@ GNUMACH_IMPORTS
 
 type vm_cache_statistics_data_t = struct[11] of integer_t;
 
+type vm_wire_t = int;
+
 /*
  * Return page cache statistics for the host on which the target task
  * resides.
@@ -136,3 +138,16 @@ simpleroutine gsync_requeue(
   wake_one : boolean_t;
   flags : int);
 
+/*
+ * If the VM_WIRE_CURRENT flag is passed, specify that the entire
+ * virtual address space of the target task must not cause page faults.
+ *
+ * If the VM_WIRE_FUTURE flag is passed, automatically wire new
+ * mappings in the address space of the target task.
+ *
+ * If the flags are empty (VM_WIRE_NONE), unwire all mappings.
+ */
+routine        vm_wire_all(
+               host            : mach_port_t;
+               task            : vm_task_t;
+               flags           : vm_wire_t);
diff --git a/include/mach/mach_types.h b/include/mach/mach_types.h
index 8768482..65164a9 100644
--- a/include/mach/mach_types.h
+++ b/include/mach/mach_types.h
@@ -53,6 +53,7 @@
 #include <mach/vm_prot.h>
 #include <mach/vm_statistics.h>
 #include <mach/vm_cache_statistics.h>
+#include <mach/vm_wire.h>
 
 #ifdef MACH_KERNEL
 #include <kern/task.h>         /* for task_array_t */
diff --git a/include/mach/vm_wire.h b/include/mach/vm_wire.h
new file mode 100644
index 0000000..1552dfa
--- /dev/null
+++ b/include/mach/vm_wire.h
@@ -0,0 +1,30 @@
+/*
+ * Copyright (C) 2017 Free Software Foundation
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License as published by
+ * the Free Software Foundation; either version 2 of the License, or
+ * (at your option) any later version.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+ * GNU General Public License for more details.
+ *
+ * You should have received a copy of the GNU General Public License along
+ * with this program; if not, write to the Free Software Foundation, Inc.,
+ * 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA.
+ */
+
+#ifndef _MACH_VM_WIRE_H_
+#define _MACH_VM_WIRE_H_
+
+typedef int vm_wire_t;
+
+#define VM_WIRE_NONE    0
+#define VM_WIRE_CURRENT 1
+#define VM_WIRE_FUTURE  2
+
+#define VM_WIRE_ALL     (VM_WIRE_CURRENT | VM_WIRE_FUTURE)
+
+#endif /* _MACH_VM_WIRE_H_ */
diff --git a/ipc/ipc_kmsg.c b/ipc/ipc_kmsg.c
index 527fbfc..28ed23c 100644
--- a/ipc/ipc_kmsg.c
+++ b/ipc/ipc_kmsg.c
@@ -1391,8 +1391,6 @@ ipc_kmsg_copyin_body(
                        if (length == 0)
                                data = 0;
                        else if (is_port) {
-                               if (length > 1<<20)
-                               printf("allocating %llu for message %u\n", length, kmsg->ikm_header.msgh_id);
                                data = kalloc(length);
                                if (data == 0)
                                        goto invalid_memory;
diff --git a/ipc/mach_port.c b/ipc/mach_port.c
index 93a1248..5cc3998 100644
--- a/ipc/mach_port.c
+++ b/ipc/mach_port.c
@@ -216,11 +216,11 @@ mach_port_names(
                /* can't fault while we hold locks */
 
                kr = vm_map_pageable(ipc_kernel_map, addr1, addr1 + size,
-                                    VM_PROT_READ|VM_PROT_WRITE);
+                                    VM_PROT_READ|VM_PROT_WRITE, TRUE, TRUE);
                assert(kr == KERN_SUCCESS);
 
                kr = vm_map_pageable(ipc_kernel_map, addr2, addr2 + size,
-                                    VM_PROT_READ|VM_PROT_WRITE);
+                                    VM_PROT_READ|VM_PROT_WRITE, TRUE, TRUE);
                assert(kr == KERN_SUCCESS);
        }
        /* space is read-locked and active */
@@ -263,12 +263,12 @@ mach_port_names(
 
                kr = vm_map_pageable(ipc_kernel_map,
                                     addr1, addr1 + size_used,
-                                    VM_PROT_NONE);
+                                    VM_PROT_NONE, TRUE, TRUE);
                assert(kr == KERN_SUCCESS);
 
                kr = vm_map_pageable(ipc_kernel_map,
                                     addr2, addr2 + size_used,
-                                    VM_PROT_NONE);
+                                    VM_PROT_NONE, TRUE, TRUE);
                assert(kr == KERN_SUCCESS);
 
                kr = vm_map_copyin(ipc_kernel_map, addr1, size_used,
@@ -938,7 +938,7 @@ mach_port_get_set_status(
                /* can't fault while we hold locks */
 
                kr = vm_map_pageable(ipc_kernel_map, addr, addr + size,
-                                    VM_PROT_READ|VM_PROT_WRITE);
+                                    VM_PROT_READ|VM_PROT_WRITE, TRUE, TRUE);
                assert(kr == KERN_SUCCESS);
 
                kr = ipc_right_lookup_read(space, name, &entry);
@@ -1003,7 +1003,7 @@ mach_port_get_set_status(
 
                kr = vm_map_pageable(ipc_kernel_map,
                                     addr, addr + size_used,
-                                    VM_PROT_NONE);
+                                    VM_PROT_NONE, TRUE, TRUE);
                assert(kr == KERN_SUCCESS);
 
                kr = vm_map_copyin(ipc_kernel_map, addr, size_used,
diff --git a/kern/rbtree.h b/kern/rbtree.h
index 16ef273..f885fe7 100644
--- a/kern/rbtree.h
+++ b/kern/rbtree.h
@@ -301,6 +301,6 @@ void rbtree_remove(struct rbtree *tree, struct rbtree_node *node);
 for (node = rbtree_postwalk_deepest(tree),              \
      tmp = rbtree_postwalk_unlink(node);                \
      node != NULL;                                      \
-     node = tmp, tmp = rbtree_postwalk_unlink(node))    \
+     node = tmp, tmp = rbtree_postwalk_unlink(node))
 
 #endif /* _KERN_RBTREE_H */
diff --git a/version.m4 b/version.m4
index 2d1efff..da30815 100644
--- a/version.m4
+++ b/version.m4
@@ -1,4 +1,4 @@
 m4_define([AC_PACKAGE_NAME],[GNU Mach])
-m4_define([AC_PACKAGE_VERSION],[1.7+git20161202])
+m4_define([AC_PACKAGE_VERSION],[1.8+git20170102])
 m4_define([AC_PACKAGE_BUGREPORT],address@hidden)
 m4_define([AC_PACKAGE_TARNAME],[gnumach])
diff --git a/vm/vm_debug.c b/vm/vm_debug.c
index 47889ad..43221cf 100644
--- a/vm/vm_debug.c
+++ b/vm/vm_debug.c
@@ -156,8 +156,8 @@ mach_vm_region_info(
        regionp->vri_protection = entry->protection;
        regionp->vri_max_protection = entry->max_protection;
        regionp->vri_inheritance = entry->inheritance;
-       regionp->vri_wired_count = entry->wired_count;
-       regionp->vri_user_wired_count = entry->user_wired_count;
+       regionp->vri_wired_count = !!entry->wired_count; /* Doesn't stack */
+       regionp->vri_user_wired_count = regionp->vri_wired_count; /* Obsolete */
 
        object = entry->object.vm_object;
        *portp = vm_object_real_name(object);
diff --git a/vm/vm_map.c b/vm/vm_map.c
index 604177e..855d799 100644
--- a/vm/vm_map.c
+++ b/vm/vm_map.c
@@ -39,6 +39,7 @@
 #include <mach/port.h>
 #include <mach/vm_attributes.h>
 #include <mach/vm_param.h>
+#include <mach/vm_wire.h>
 #include <kern/assert.h>
 #include <kern/debug.h>
 #include <kern/kalloc.h>
@@ -68,14 +69,14 @@
  * wire count; it's used for map splitting and cache changing in
  * vm_map_copyout.
  */
-#define vm_map_entry_copy(NEW,OLD) \
-MACRO_BEGIN                                     \
-                *(NEW) = *(OLD);                \
-                (NEW)->is_shared = FALSE;      \
-                (NEW)->needs_wakeup = FALSE;    \
-                (NEW)->in_transition = FALSE;   \
-                (NEW)->wired_count = 0;         \
-                (NEW)->user_wired_count = 0;    \
+#define vm_map_entry_copy(NEW,OLD)                     \
+MACRO_BEGIN                                            \
+                *(NEW) = *(OLD);                       \
+                (NEW)->is_shared = FALSE;              \
+                (NEW)->needs_wakeup = FALSE;           \
+                (NEW)->in_transition = FALSE;          \
+                (NEW)->wired_count = 0;                        \
+                (NEW)->wired_access = VM_PROT_NONE;    \
 MACRO_END
 
 #define vm_map_entry_copy_full(NEW,OLD)        (*(NEW) = *(OLD))
@@ -184,7 +185,7 @@ void vm_map_setup(
        rbtree_init(&map->hdr.gap_tree);
 
        map->size = 0;
-       map->user_wired = 0;
+       map->size_wired = 0;
        map->ref_count = 1;
        map->pmap = pmap;
        map->min_offset = min;
@@ -744,7 +745,7 @@ restart:
        return entry;
 
 error:
-       printf("no more room in %p (%s) for allocating %u\n", map, map->name, size);
+       printf("no more room in %p (%s)\n", map, map->name);
        return NULL;
 }
 
@@ -809,8 +810,7 @@ kern_return_t vm_map_find_entry(
            (entry->inheritance == VM_INHERIT_DEFAULT) &&
            (entry->protection == VM_PROT_DEFAULT) &&
            (entry->max_protection == VM_PROT_ALL) &&
-           (entry->wired_count == 1) &&
-           (entry->user_wired_count == 0) &&
+           (entry->wired_count != 0) &&
            (entry->projected_on == 0)) {
                /*
                 *      Because this is a special case,
@@ -837,7 +837,7 @@ kern_return_t vm_map_find_entry(
                new_entry->protection = VM_PROT_DEFAULT;
                new_entry->max_protection = VM_PROT_ALL;
                new_entry->wired_count = 1;
-               new_entry->user_wired_count = 0;
+               new_entry->wired_access = VM_PROT_DEFAULT;
 
                new_entry->in_transition = FALSE;
                new_entry->needs_wakeup = FALSE;
@@ -1041,7 +1041,7 @@ kern_return_t vm_map_enter(
            (entry->inheritance == inheritance) &&
            (entry->protection == cur_protection) &&
            (entry->max_protection == max_protection) &&
-           (entry->wired_count == 0) &&  /* implies user_wired_count == 0 */
+           (entry->wired_count == 0) &&
            (entry->projected_on == 0)) {
                if (vm_object_coalesce(entry->object.vm_object,
                                VM_OBJECT_NULL,
@@ -1085,7 +1085,7 @@ kern_return_t vm_map_enter(
        new_entry->protection = cur_protection;
        new_entry->max_protection = max_protection;
        new_entry->wired_count = 0;
-       new_entry->user_wired_count = 0;
+       new_entry->wired_access = VM_PROT_NONE;
 
        new_entry->in_transition = FALSE;
        new_entry->needs_wakeup = FALSE;
@@ -1109,6 +1109,15 @@ kern_return_t vm_map_enter(
 
        SAVE_HINT(map, new_entry);
 
+       if (map->wiring_required) {
+               /* Returns with the map read-locked if successful */
+               result = vm_map_pageable(map, start, end, cur_protection, FALSE, FALSE);
+
+               if (result != KERN_SUCCESS) {
+                       RETURN(KERN_SUCCESS);
+               }
+       }
+
        vm_map_unlock(map);
 
        if ((object != VM_OBJECT_NULL) &&
@@ -1307,6 +1316,207 @@ kern_return_t vm_map_submap(
        return(result);
 }
 
+static void
+vm_map_entry_inc_wired(vm_map_t map, vm_map_entry_t entry)
+{
+       /*
+        * This member is a counter to indicate whether an entry
+        * should be faulted in (first time it is wired, wired_count
+        * goes from 0 to 1) or not (other times, wired_count goes
+        * from 1 to 2 or remains 2).
+        */
+       if (entry->wired_count > 1) {
+               return;
+       }
+
+       if (entry->wired_count == 0) {
+               map->size_wired += entry->vme_end - entry->vme_start;
+       }
+
+       entry->wired_count++;
+}
+
+static void
+vm_map_entry_reset_wired(vm_map_t map, vm_map_entry_t entry)
+{
+       if (entry->wired_count != 0) {
+               map->size_wired -= entry->vme_end - entry->vme_start;
+               entry->wired_count = 0;
+       }
+}
+
+/*
+ *     vm_map_pageable_scan: scan entries and update wiring as appropriate
+ *
+ *     This function is used by the VM system after either the wiring
+ *     access or protection of a mapping changes. It scans part or
+ *     all the entries of a map, and either wires, unwires, or skips
+ *     entries depending on their state.
+ *
+ *     The map must be locked. If wiring faults are performed, the lock
+ *     is downgraded to a read lock. The caller should always consider
+ *     the map read locked on return.
+ */
+static void
+vm_map_pageable_scan(struct vm_map *map,
+                    struct vm_map_entry *start,
+                    struct vm_map_entry *end)
+{
+       struct vm_map_entry *entry;
+       boolean_t do_wire_faults;
+
+       /*
+        * Pass 1. Update counters and prepare wiring faults.
+        */
+
+       do_wire_faults = FALSE;
+
+       for (entry = start; entry != end; entry = entry->vme_next) {
+
+               /*
+                * Unwiring.
+                *
+                * Note that unwiring faults can be performed while
+                * holding a write lock on the map. A wiring fault
+                * can only be done with a read lock.
+                */
+
+               if (entry->wired_access == VM_PROT_NONE) {
+                       if (entry->wired_count != 0) {
+                               vm_map_entry_reset_wired(map, entry);
+                               vm_fault_unwire(map, entry);
+                       }
+
+                       continue;
+               }
+
+               /*
+                * Wiring.
+                */
+
+               if (entry->protection == VM_PROT_NONE) {
+
+                       /*
+                        * Make sure entries that cannot be accessed
+                        * because of their protection aren't wired.
+                        */
+
+                       if (entry->wired_count == 0) {
+                               continue;
+                       }
+
+                       /*
+                        * This normally occurs after changing the protection of
+                        * a wired region to VM_PROT_NONE.
+                        */
+                       vm_map_entry_reset_wired(map, entry);
+                       vm_fault_unwire(map, entry);
+                       continue;
+               }
+
+               /*
+                *      We must do this in two passes:
+                *
+                *      1.  Holding the write lock, we create any shadow
+                *          or zero-fill objects that need to be created.
+                *          Then we increment the wiring count.
+                *
+                *      2.  We downgrade to a read lock, and call
+                *          vm_fault_wire to fault in the pages for any
+                *          newly wired area (wired_count is 1).
+                *
+                *      Downgrading to a read lock for vm_fault_wire avoids
+                *      a possible deadlock with another thread that may have
+                *      faulted on one of the pages to be wired (it would mark
+                *      the page busy, blocking us, then in turn block on the
+                *      map lock that we hold).  Because of problems in the
+                *      recursive lock package, we cannot upgrade to a write
+                *      lock in vm_map_lookup.  Thus, any actions that require
+                *      the write lock must be done beforehand.  Because we
+                *      keep the read lock on the map, the copy-on-write
+                *      status of the entries we modify here cannot change.
+                */
+
+               if (entry->wired_count == 0) {
+                       /*
+                        *      Perform actions of vm_map_lookup that need
+                        *      the write lock on the map: create a shadow
+                        *      object for a copy-on-write region, or an
+                        *      object for a zero-fill region.
+                        */
+                       if (entry->needs_copy &&
+                           ((entry->protection & VM_PROT_WRITE) != 0)) {
+                               vm_object_shadow(&entry->object.vm_object,
+                                                &entry->offset,
+                                                (vm_size_t)(entry->vme_end
+                                                            - entry->vme_start));
+                               entry->needs_copy = FALSE;
+                       }
+
+                       if (entry->object.vm_object == VM_OBJECT_NULL) {
+                               entry->object.vm_object =
+                                       vm_object_allocate(
+                                               (vm_size_t)(entry->vme_end
+                                                           - entry->vme_start));
+                               entry->offset = (vm_offset_t)0;
+                       }
+               }
+
+               vm_map_entry_inc_wired(map, entry);
+
+               if (entry->wired_count == 1) {
+                       do_wire_faults = TRUE;
+               }
+       }
+
+       /*
+        * Pass 2. Trigger wiring faults.
+        */
+
+       if (!do_wire_faults) {
+               return;
+       }
+
+       /*
+        * HACK HACK HACK HACK
+        *
+        * If we are wiring in the kernel map or a submap of it,
+        * unlock the map to avoid deadlocks.  We trust that the
+        * kernel threads are well-behaved, and therefore will
+        * not do anything destructive to this region of the map
+        * while we have it unlocked.  We cannot trust user threads
+        * to do the same.
+        *
+        * HACK HACK HACK HACK
+        */
+       if (vm_map_pmap(map) == kernel_pmap) {
+               vm_map_unlock(map); /* trust me ... */
+       } else {
+               vm_map_lock_set_recursive(map);
+               vm_map_lock_write_to_read(map);
+       }
+
+       for (entry = start; entry != end; entry = entry->vme_next) {
+               /*
+                * The wiring count can only be 1 if it was
+                * incremented by this function right before
+                * downgrading the lock.
+                */
+               if (entry->wired_count == 1) {
+                       /*
+                        * XXX This assumes that the faults always succeed.
+                        */
+                       vm_fault_wire(map, entry);
+               }
+       }
+
+       if (vm_map_pmap(map) == kernel_pmap) {
+               vm_map_lock(map);
+       } else {
+               vm_map_lock_clear_recursive(map);
+       }
+}
+
 /*
  *     vm_map_protect:
  *
@@ -1380,6 +1590,16 @@ kern_return_t vm_map_protect(
                        current->protection = new_prot;
 
                /*
+                *      Make sure the new protection doesn't conflict
+                *      with the desired wired access if any.
+                */
+
+               if ((current->protection != VM_PROT_NONE) &&
+                   (current->wired_access != VM_PROT_NONE)) {
+                       current->wired_access = current->protection;
+               }
+
+               /*
                 *      Update physical map if necessary.
                 */
 
@@ -1391,6 +1611,9 @@ kern_return_t vm_map_protect(
                current = current->vme_next;
        }
 
+       /* Returns with the map read-locked if successful */
+       vm_map_pageable_scan(map, entry, current);
+
        vm_map_unlock(map);
        return(KERN_SUCCESS);
 }
@@ -1436,7 +1659,7 @@ kern_return_t vm_map_inherit(
 }
 
 /*
- *     vm_map_pageable_common:
+ *     vm_map_pageable:
  *
  *     Sets the pageability of the specified address
  *     range in the target map.  Regions specified
@@ -1446,263 +1669,153 @@ kern_return_t vm_map_inherit(
  *     This is checked against protection of memory being locked-down.
  *     access_type of VM_PROT_NONE makes memory pageable.
  *
- *     The map must not be locked, but a reference
- *     must remain to the map throughout the call.
+ *     If lock_map is TRUE, the map is locked and unlocked
+ *     by this function. Otherwise, it is assumed the caller
+ *     already holds the lock, in which case the function
+ *     returns with the lock downgraded to a read lock if successful.
  *
- *     Callers should use macros in vm/vm_map.h (i.e. vm_map_pageable,
- *     or vm_map_pageable_user); don't call vm_map_pageable directly.
+ *     If check_range is TRUE, this function fails if it finds
+ *     holes or protection mismatches in the specified range.
+ *
+ *     A reference must remain to the map throughout the call.
  */
-kern_return_t vm_map_pageable_common(
+
+kern_return_t vm_map_pageable(
        vm_map_t        map,
        vm_offset_t     start,
        vm_offset_t     end,
        vm_prot_t       access_type,
-       boolean_t       user_wire)
+       boolean_t       lock_map,
+       boolean_t       check_range)
 {
        vm_map_entry_t          entry;
        vm_map_entry_t          start_entry;
+       vm_map_entry_t          end_entry;
 
-       vm_map_lock(map);
+       if (lock_map) {
+               vm_map_lock(map);
+       }
 
        VM_MAP_RANGE_CHECK(map, start, end);
 
-       if (vm_map_lookup_entry(map, start, &start_entry)) {
-               entry = start_entry;
-               /*
-                *      vm_map_clip_start will be done later.
-                */
-       }
-       else {
+       if (!vm_map_lookup_entry(map, start, &start_entry)) {
                /*
                 *      Start address is not in map; this is fatal.
                 */
-               vm_map_unlock(map);
-               return(KERN_FAILURE);
+               if (lock_map) {
+                       vm_map_unlock(map);
+               }
+
+               return KERN_NO_SPACE;
        }
 
        /*
-        *      Actions are rather different for wiring and unwiring,
-        *      so we have two separate cases.
+        * Pass 1. Clip entries, check for holes and protection mismatches
+        * if requested.
         */
 
-       if (access_type == VM_PROT_NONE) {
+       vm_map_clip_start(map, start_entry, start);
 
-               vm_map_clip_start(map, entry, start);
+       for (entry = start_entry;
+            (entry != vm_map_to_entry(map)) &&
+            (entry->vme_start < end);
+            entry = entry->vme_next) {
+               vm_map_clip_end(map, entry, end);
 
-               /*
-                *      Unwiring.  First ensure that the range to be
-                *      unwired is really wired down.
-                */
-               while ((entry != vm_map_to_entry(map)) &&
-                      (entry->vme_start < end)) {
-
-                   if ((entry->wired_count == 0) ||
-                       ((entry->vme_end < end) &&
-                        ((entry->vme_next == vm_map_to_entry(map)) ||
-                         (entry->vme_next->vme_start > entry->vme_end))) ||
-                       (user_wire && (entry->user_wired_count == 0))) {
-                           vm_map_unlock(map);
-                           return(KERN_INVALID_ARGUMENT);
-                   }
-                   entry = entry->vme_next;
+               if (check_range &&
+                   (((entry->vme_end < end) &&
+                     ((entry->vme_next == vm_map_to_entry(map)) ||
+                      (entry->vme_next->vme_start > entry->vme_end))) ||
+                    ((entry->protection & access_type) != access_type))) {
+                       if (lock_map) {
+                               vm_map_unlock(map);
+                       }
+
+                       return KERN_NO_SPACE;
                }
+       }
 
-               /*
-                *      Now decrement the wiring count for each region.
-                *      If a region becomes completely unwired,
-                *      unwire its physical pages and mappings.
-                */
-               entry = start_entry;
-               while ((entry != vm_map_to_entry(map)) &&
-                      (entry->vme_start < end)) {
-                   vm_map_clip_end(map, entry, end);
-
-                   if (user_wire) {
-                       if (--(entry->user_wired_count) == 0)
-                       {
-                           map->user_wired -= entry->vme_end - entry->vme_start;
-                           entry->wired_count--;
-                       }
-                   }
-                   else {
-                       entry->wired_count--;
-                   }
+       end_entry = entry;
 
-                   if (entry->wired_count == 0)
-                       vm_fault_unwire(map, entry);
+       /*
+        * Pass 2. Set the desired wired access.
+        */
 
-                   entry = entry->vme_next;
-               }
+       for (entry = start_entry; entry != end_entry; entry = entry->vme_next) {
+               entry->wired_access = access_type;
        }
 
-       else {
-               /*
-                *      Wiring.  We must do this in two passes:
-                *
-                *      1.  Holding the write lock, we create any shadow
-                *          or zero-fill objects that need to be created.
-                *          Then we clip each map entry to the region to be
-                *          wired and increment its wiring count.  We
-                *          create objects before clipping the map entries
-                *          to avoid object proliferation.
-                *
-                *      2.  We downgrade to a read lock, and call
-                *          vm_fault_wire to fault in the pages for any
-                *          newly wired area (wired_count is 1).
-                *
-                *      Downgrading to a read lock for vm_fault_wire avoids
-                *      a possible deadlock with another thread that may have
-                *      faulted on one of the pages to be wired (it would mark
-                *      the page busy, blocking us, then in turn block on the
-                *      map lock that we hold).  Because of problems in the
-                *      recursive lock package, we cannot upgrade to a write
-                *      lock in vm_map_lookup.  Thus, any actions that require
-                *      the write lock must be done beforehand.  Because we
-                *      keep the read lock on the map, the copy-on-write
-                *      status of the entries we modify here cannot change.
-                */
+       /* Returns with the map read-locked */
+       vm_map_pageable_scan(map, start_entry, end_entry);
 
-               /*
-                *      Pass 1.
-                */
-               while ((entry != vm_map_to_entry(map)) &&
-                      (entry->vme_start < end)) {
-                   vm_map_clip_end(map, entry, end);
+       if (lock_map) {
+               vm_map_unlock(map);
+       }
 
-                   if (entry->wired_count == 0) {
+       return(KERN_SUCCESS);
+}
 
-                       /*
-                        *      Perform actions of vm_map_lookup that need
-                        *      the write lock on the map: create a shadow
-                        *      object for a copy-on-write region, or an
-                        *      object for a zero-fill region.
-                        */
-                       if (entry->needs_copy &&
-                           ((entry->protection & VM_PROT_WRITE) != 0)) {
+/*
+ *     vm_map_pageable_all:
+ *
+ *     Sets the pageability of an entire map. If the VM_WIRE_CURRENT
+ *     flag is set, then all current mappings are locked down. If the
+ *     VM_WIRE_FUTURE flag is set, then all mappings created after the
+ *     call returns are locked down. If no flags are passed
+ *     (i.e. VM_WIRE_NONE), all mappings become pageable again, and
+ *     future mappings aren't automatically locked down any more.
+ *
+ *     The access type of the mappings match their current protection.
+ *     Null mappings (with protection PROT_NONE) are updated to track
+ *     that they should be wired in case they become accessible.
+ */
+kern_return_t
+vm_map_pageable_all(struct vm_map *map, vm_wire_t flags)
+{
+       boolean_t wiring_required;
+       kern_return_t kr;
 
-                               vm_object_shadow(&entry->object.vm_object,
-                                               &entry->offset,
-                                               (vm_size_t)(entry->vme_end
-                                                       - entry->vme_start));
-                               entry->needs_copy = FALSE;
-                       }
-                       if (entry->object.vm_object == VM_OBJECT_NULL) {
-                               entry->object.vm_object =
-                                       vm_object_allocate(
-                                           (vm_size_t)(entry->vme_end
-                                                       - entry->vme_start));
-                               entry->offset = (vm_offset_t)0;
-                       }
-                   }
-                   vm_map_clip_start(map, entry, start);
-                   vm_map_clip_end(map, entry, end);
-
-                   if (user_wire) {
-                       if ((entry->user_wired_count)++ == 0)
-                       {
-                           map->user_wired += entry->vme_end - entry->vme_start;
-                           entry->wired_count++;
-                       }
-                   }
-                   else {
-                       entry->wired_count++;
-                   }
+       if ((flags & ~VM_WIRE_ALL) != 0) {
+               return KERN_INVALID_ARGUMENT;
+       }
 
-                   /*
-                    *  Check for holes and protection mismatch.
-                    *  Holes: Next entry should be contiguous unless
-                    *          this is the end of the region.
-                    *  Protection: Access requested must be allowed.
-                    */
-                   if (((entry->vme_end < end) &&
-                        ((entry->vme_next == vm_map_to_entry(map)) ||
-                         (entry->vme_next->vme_start > entry->vme_end))) ||
-                       ((entry->protection & access_type) != access_type)) {
-                           /*
-                            *  Found a hole or protection problem.
-                            *  Object creation actions
-                            *  do not need to be undone, but the
-                            *  wired counts need to be restored.
-                            */
-                           while ((entry != vm_map_to_entry(map)) &&
-                               (entry->vme_end > start)) {
-                                   if (user_wire) {
-                                       if (--(entry->user_wired_count) == 0)
-                                       {
-                                           map->user_wired -= entry->vme_end - entry->vme_start;
-                                           entry->wired_count--;
-                                       }
-                                   }
-                                   else {
-                                      entry->wired_count--;
-                                   }
+       vm_map_lock(map);
 
-                                   entry = entry->vme_prev;
-                           }
+       if (flags == VM_WIRE_NONE) {
+               map->wiring_required = FALSE;
 
-                           vm_map_unlock(map);
-                           return(KERN_FAILURE);
-                   }
-                   entry = entry->vme_next;
-               }
+               /* Returns with the map read-locked if successful */
+               kr = vm_map_pageable(map, map->min_offset, map->max_offset,
+                                    VM_PROT_NONE, FALSE, FALSE);
+               vm_map_unlock(map);
+               return kr;
+       }
 
-               /*
-                *      Pass 2.
-                */
+       wiring_required = map->wiring_required;
 
-               /*
-                * HACK HACK HACK HACK
-                *
-                * If we are wiring in the kernel map or a submap of it,
-                * unlock the map to avoid deadlocks.  We trust that the
-                * kernel threads are well-behaved, and therefore will
-                * not do anything destructive to this region of the map
-                * while we have it unlocked.  We cannot trust user threads
-                * to do the same.
-                *
-                * HACK HACK HACK HACK
-                */
-               if (vm_map_pmap(map) == kernel_pmap) {
-                   vm_map_unlock(map);         /* trust me ... */
-               }
-               else {
-                   vm_map_lock_set_recursive(map);
-                   vm_map_lock_write_to_read(map);
-               }
+       if (flags & VM_WIRE_FUTURE) {
+               map->wiring_required = TRUE;
+       }
 
-               entry = start_entry;
-               while (entry != vm_map_to_entry(map) &&
-                       entry->vme_start < end) {
-                   /*
-                    *  Wiring cases:
-                    *      Kernel: wired == 1 && user_wired == 0
-                    *      User:   wired == 1 && user_wired == 1
-                    *
-                    *  Don't need to wire if either is > 1.  wired = 0 &&
-                    *  user_wired == 1 can't happen.
-                    */
+       if (flags & VM_WIRE_CURRENT) {
+               /* Returns with the map read-locked if successful */
+               kr = vm_map_pageable(map, map->min_offset, map->max_offset,
+                                    VM_PROT_READ | VM_PROT_WRITE,
+                                    FALSE, FALSE);
 
-                   /*
-                    *  XXX This assumes that the faults always succeed.
-                    */
-                   if ((entry->wired_count == 1) &&
-                       (entry->user_wired_count <= 1)) {
-                           vm_fault_wire(map, entry);
-                   }
-                   entry = entry->vme_next;
-               }
+               if (kr != KERN_SUCCESS) {
+                       if (flags & VM_WIRE_FUTURE) {
+                               map->wiring_required = wiring_required;
+                       }
 
-               if (vm_map_pmap(map) == kernel_pmap) {
-                   vm_map_lock(map);
-               }
-               else {
-                   vm_map_lock_clear_recursive(map);
+                       vm_map_unlock(map);
+                       return kr;
                }
        }
 
        vm_map_unlock(map);
 
-       return(KERN_SUCCESS);
+       return KERN_SUCCESS;
 }
 
 /*
@@ -1744,11 +1857,8 @@ void vm_map_entry_delete(
             */
 
            if (entry->wired_count != 0) {
+               vm_map_entry_reset_wired(map, entry);
                vm_fault_unwire(map, entry);
-               entry->wired_count = 0;
-               if (entry->user_wired_count)
-                   map->user_wired -= entry->vme_end - entry->vme_start;
-               entry->user_wired_count = 0;
            }
 
            /*
@@ -2392,10 +2502,7 @@ start_pass_1:
                        entry->object = copy_entry->object;
                        entry->offset = copy_entry->offset;
                        entry->needs_copy = copy_entry->needs_copy;
-                       entry->wired_count = 0;
-                       if (entry->user_wired_count)
-                           dst_map->user_wired -= entry->vme_end - entry->vme_start;
-                       entry->user_wired_count = 0;
+                       vm_map_entry_reset_wired(dst_map, entry);
 
                        vm_map_copy_entry_unlink(copy, copy_entry);
                        vm_map_copy_entry_dispose(copy, copy_entry);
@@ -2571,6 +2678,7 @@ kern_return_t vm_map_copyout(
        vm_offset_t     vm_copy_start;
        vm_map_entry_t  last;
        vm_map_entry_t  entry;
+       kern_return_t   kr;
 
        /*
         *      Check for null copy object.
@@ -2590,7 +2698,6 @@ kern_return_t vm_map_copyout(
                vm_object_t object = copy->cpy_object;
                vm_size_t offset = copy->offset;
                vm_size_t tmp_size = copy->size;
-               kern_return_t kr;
 
                *dst_addr = 0;
                kr = vm_map_enter(dst_map, dst_addr, tmp_size,
@@ -2730,11 +2837,19 @@ kern_return_t vm_map_copyout(
 
        vm_map_copy_insert(dst_map, last, copy);
 
-       vm_map_unlock(dst_map);
+       if (dst_map->wiring_required) {
+               /* Returns with the map read-locked if successful */
+               kr = vm_map_pageable(dst_map, start, start + size,
+                                    VM_PROT_READ | VM_PROT_WRITE,
+                                    FALSE, FALSE);
 
-       /*
-        * XXX  If wiring_required, call vm_map_pageable
-        */
+               if (kr != KERN_SUCCESS) {
+                       vm_map_unlock(dst_map);
+                       return kr;
+               }
+       }
+
+       vm_map_unlock(dst_map);
 
        return(KERN_SUCCESS);
 }
@@ -2812,9 +2927,8 @@ kern_return_t vm_map_copyout_page_list(
            last->inheritance != VM_INHERIT_DEFAULT ||
            last->protection != VM_PROT_DEFAULT ||
            last->max_protection != VM_PROT_ALL ||
-           (must_wire ? (last->wired_count != 1 ||
-                   last->user_wired_count != 1) :
-               (last->wired_count != 0))) {
+           (must_wire ? (last->wired_count == 0)
+                      : (last->wired_count != 0))) {
                    goto create_object;
        }
 
@@ -2906,14 +3020,13 @@ create_object:
        entry->is_shared = FALSE;
        entry->is_sub_map = FALSE;
        entry->needs_copy = FALSE;
+       entry->wired_count = 0;
 
        if (must_wire) {
-               entry->wired_count = 1;
-               dst_map->user_wired += entry->vme_end - entry->vme_start;
-               entry->user_wired_count = 1;
+               vm_map_entry_inc_wired(dst_map, entry);
+               entry->wired_access = VM_PROT_DEFAULT;
        } else {
-               entry->wired_count = 0;
-               entry->user_wired_count = 0;
+               entry->wired_access = VM_PROT_NONE;
        }
 
        entry->in_transition = TRUE;
@@ -4009,10 +4122,7 @@ retry:
                                                src_start + src_size);
 
                                        assert(src_entry->wired_count > 0);
-                                       src_entry->wired_count = 0;
-                                       if (src_entry->user_wired_count)
-                                           src_map->user_wired -= src_entry->vme_end - src_entry->vme_start;
-                                       src_entry->user_wired_count = 0;
+                                       vm_map_entry_reset_wired(src_map, src_entry);
                                        unwire_end = src_entry->vme_end;
                                        pmap_pageable(vm_map_pmap(src_map),
                                            page_vaddr, unwire_end, TRUE);
@@ -4694,7 +4804,6 @@ void vm_map_simplify(
                (prev_entry->protection == this_entry->protection) &&
                (prev_entry->max_protection == this_entry->max_protection) &&
                (prev_entry->wired_count == this_entry->wired_count) &&
-               (prev_entry->user_wired_count == this_entry->user_wired_count) &&
 
                (prev_entry->needs_copy == this_entry->needs_copy) &&
 
@@ -4773,7 +4882,9 @@ void vm_map_print(db_expr_t addr, boolean_t have_addr, db_expr_t count, const ch
 
        iprintf("Map 0x%X: name=\"%s\", pmap=0x%X,",
                (vm_offset_t) map, map->name, (vm_offset_t) (map->pmap));
-        printf("ref=%d,nentries=%d,", map->ref_count, map->hdr.nentries);
+        printf("ref=%d,nentries=%d\n", map->ref_count, map->hdr.nentries);
+        printf("size=%lu,resident:%lu,wired=%lu\n", map->size,
+               pmap_resident_count(map->pmap) * PAGE_SIZE, map->size_wired);
         printf("version=%d\n", map->timestamp);
        indent += 1;
        for (entry = vm_map_first_entry(map);
@@ -4789,13 +4900,7 @@ void vm_map_print(db_expr_t addr, boolean_t have_addr, db_expr_t count, const ch
                        entry->max_protection,
                        inheritance_name[entry->inheritance]);
                if (entry->wired_count != 0) {
-                       printf("wired(");
-                       if (entry->user_wired_count != 0)
-                               printf("u");
-                       if (entry->wired_count >
-                           ((entry->user_wired_count == 0) ? 0 : 1))
-                               printf("k");
-                       printf(") ");
+                       printf("wired, ");
                }
                if (entry->in_transition) {
                        printf("in transition");
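The new vm_map_pageable_all above rejects unknown flag bits before touching the map. A minimal, runnable sketch of that flag check follows; the flag values are assumptions, since the new include/mach/vm_wire.h header is not quoted in this mail:

```c
#include <assert.h>

/* Assumed flag values; the real definitions live in the new
 * include/mach/vm_wire.h, which these hunks don't show. */
#define VM_WIRE_NONE    0
#define VM_WIRE_CURRENT 1
#define VM_WIRE_FUTURE  2
#define VM_WIRE_ALL     (VM_WIRE_CURRENT | VM_WIRE_FUTURE)

/* Mirrors the argument check at the top of vm_map_pageable_all:
 * any bit outside VM_WIRE_ALL is an invalid argument. */
static int wire_flags_valid(int flags)
{
        return (flags & ~VM_WIRE_ALL) == 0;
}
```

Note that VM_WIRE_NONE is not a bit but the absence of bits, which is why the function treats "no flags" as the unwire-everything case.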
diff --git a/vm/vm_map.h b/vm/vm_map.h
index 537c36e..87660f3 100644
--- a/vm/vm_map.h
+++ b/vm/vm_map.h
@@ -46,6 +46,7 @@
 #include <mach/vm_attributes.h>
 #include <mach/vm_prot.h>
 #include <mach/vm_inherit.h>
+#include <mach/vm_wire.h>
 #include <vm/pmap.h>
 #include <vm/vm_object.h>
 #include <vm/vm_page.h>
@@ -129,7 +130,9 @@ struct vm_map_entry {
        vm_prot_t               max_protection; /* maximum protection */
        vm_inherit_t            inheritance;    /* inheritance */
        unsigned short          wired_count;    /* can be paged if = 0 */
-       unsigned short          user_wired_count; /* for vm_wire */
+       vm_prot_t               wired_access;   /* wiring access types, as accepted
+                                                  by vm_map_pageable; used on wiring
+                                                  scans when protection != VM_PROT_NONE */
        struct vm_map_entry     *projected_on;  /* 0 for normal map entry
            or persistent kernel map projected buffer entry;
            -1 for non-persistent kernel map projected buffer entry;
@@ -179,7 +182,7 @@ struct vm_map {
 #define max_offset             hdr.links.end   /* end of range */
        pmap_t                  pmap;           /* Physical map */
        vm_size_t               size;           /* virtual size */
-       vm_size_t               user_wired;     /* wired by user size */
+       vm_size_t               size_wired;     /* wired size */
        int                     ref_count;      /* Reference count */
        decl_simple_lock_data(, ref_lock)       /* Lock for ref_count field */
        vm_map_entry_t          hint;           /* hint for quick lookups */
@@ -189,7 +192,7 @@ struct vm_map {
        /* Flags */
        unsigned int    wait_for_space:1,       /* Should callers wait
                                                   for space? */
-       /* boolean_t */ wiring_required:1;      /* All memory wired? */
+       /* boolean_t */ wiring_required:1;      /* New mappings are wired? */
 
        unsigned int            timestamp;      /* Version number */
 
@@ -485,17 +488,12 @@ static inline void vm_map_set_name(vm_map_t map, const char *name)
                                                 * a verified lookup is
                                                 * now complete */
 /*
- *     Pageability functions.  Includes macro to preserve old interface.
+ *     Pageability functions.
  */
-extern kern_return_t   vm_map_pageable_common(vm_map_t, vm_offset_t,
-                                              vm_offset_t, vm_prot_t,
-                                              boolean_t);
+extern kern_return_t   vm_map_pageable(vm_map_t, vm_offset_t, vm_offset_t,
+                                       vm_prot_t, boolean_t, boolean_t);
 
-#define vm_map_pageable(map, s, e, access)     \
-               vm_map_pageable_common(map, s, e, access, FALSE)
-
-#define vm_map_pageable_user(map, s, e, access)        \
-               vm_map_pageable_common(map, s, e, access, TRUE)
+extern kern_return_t   vm_map_pageable_all(vm_map_t, vm_wire_t);
 
 /*
  *     Submap object.  Must be used to create memory to be put
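Several hunks in this patch call vm_map_entry_inc_wired and vm_map_entry_reset_wired, whose definitions fall outside the quoted context. A plausible sketch, consistent with the size_wired accounting visible in the diff (minimal stand-in structs, not the kernel's real types):

```c
#include <assert.h>

/* Minimal stand-ins for the kernel structures, for illustration only. */
struct vm_map_entry {
        unsigned long  vme_start, vme_end;
        unsigned short wired_count;
};

struct vm_map {
        unsigned long size_wired;
};

/* Sketch: size_wired counts the bytes of entries whose wired_count is
 * non-zero, so it only changes on 0 <-> non-zero transitions. */
static void vm_map_entry_inc_wired(struct vm_map *map, struct vm_map_entry *entry)
{
        if (entry->wired_count == 0)
                map->size_wired += entry->vme_end - entry->vme_start;
        entry->wired_count++;
}

static void vm_map_entry_reset_wired(struct vm_map *map, struct vm_map_entry *entry)
{
        if (entry->wired_count != 0)
                map->size_wired -= entry->vme_end - entry->vme_start;
        entry->wired_count = 0;
}
```

This matches the code these helpers replace, e.g. in vm_map_entry_delete, where user_wired used to be decremented exactly once per entry regardless of the wire count.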
diff --git a/vm/vm_page.c b/vm/vm_page.c
index 92e36a1..9a7fa27 100644
--- a/vm/vm_page.c
+++ b/vm/vm_page.c
@@ -44,6 +44,7 @@
 #include <mach/vm_param.h>
 #include <machine/pmap.h>
 #include <sys/types.h>
+#include <vm/memory_object.h>
 #include <vm/vm_page.h>
 #include <vm/vm_pageout.h>
 
@@ -1087,13 +1088,13 @@ vm_page_seg_evict(struct vm_page_seg *seg, boolean_t external_only,
                   boolean_t alloc_paused)
 {
     struct vm_page *page;
-    boolean_t reclaim, laundry;
+    boolean_t reclaim, double_paging;
     vm_object_t object;
     boolean_t was_active;
 
     page = NULL;
     object = NULL;
-    laundry = FALSE;
+    double_paging = FALSE;
 
 restart:
     vm_page_lock_queues();
@@ -1147,16 +1148,27 @@ restart:
      * processing of this page since it's immediately going to be
      * double paged out to the default pager. The laundry bit is
      * reset and the page is inserted into an internal object by
-     * vm_pageout_setup before the double paging pass.
+     * vm_pageout_setup before the second double paging pass.
+     *
+     * There is one important special case: the default pager can
+     * back external memory objects. When receiving the first
+     * pageout request, where the page is no longer present, a
+     * fault could occur, during which the map would be locked.
+     * This fault would cause a new paging request to the default
+     * pager. Receiving that request would deadlock when trying to
+     * lock the map again. Instead, the page isn't double paged
+     * and vm_pageout_setup wires the page down, trusting the
+     * default pager as for internal pages.
      */
 
     assert(!page->laundry);
-    assert(!(laundry && page->external));
+    assert(!(double_paging && page->external));
 
-    if (object->internal || !alloc_paused) {
-        laundry = FALSE;
+    if (object->internal || !alloc_paused ||
+        memory_manager_default_port(object->pager)) {
+        double_paging = FALSE;
     } else {
-        laundry = page->laundry = TRUE;
+        double_paging = page->laundry = TRUE;
     }
 
 out:
@@ -1203,7 +1215,7 @@ out:
     vm_pageout_page(page, FALSE, TRUE); /* flush it */
     vm_object_unlock(object);
 
-    if (laundry) {
+    if (double_paging) {
         goto restart;
     }
 
diff --git a/vm/vm_pageout.c b/vm/vm_pageout.c
index 7dc9c12..575a9f5 100644
--- a/vm/vm_pageout.c
+++ b/vm/vm_pageout.c
@@ -47,6 +47,7 @@
 #include <kern/task.h>
 #include <kern/thread.h>
 #include <kern/printf.h>
+#include <vm/memory_object.h>
 #include <vm/pmap.h>
 #include <vm/vm_map.h>
 #include <vm/vm_object.h>
@@ -253,7 +254,8 @@ vm_pageout_setup(
 
                assert(!old_object->internal);
                m->laundry = FALSE;
-       } else if (old_object->internal) {
+       } else if (old_object->internal ||
+                  memory_manager_default_port(old_object->pager)) {
                m->laundry = TRUE;
                vm_page_laundry_count++;
 
@@ -470,7 +472,7 @@ void vm_pageout(void)
                                     FALSE);
                } else if (should_wait) {
                        assert_wait(&vm_pageout_continue, FALSE);
-                       thread_set_timeout(VM_PAGEOUT_TIMEOUT);
+                       thread_set_timeout(VM_PAGEOUT_TIMEOUT * hz / 1000);
                        simple_unlock(&vm_page_queue_free_lock);
                        thread_block(NULL);
 
diff --git a/vm/vm_user.c b/vm/vm_user.c
index 7fc0fe8..6c1e3d6 100644
--- a/vm/vm_user.c
+++ b/vm/vm_user.c
@@ -441,11 +441,41 @@ kern_return_t vm_wire(port, map, start, size, access)
                return(KERN_INVALID_ARGUMENT);
 
        /* TODO: make it tunable */
-       if (!priv && access != VM_PROT_NONE && map->user_wired + size > 65536)
+       if (!priv && access != VM_PROT_NONE && map->size_wired + size > 65536)
                return KERN_NO_ACCESS;
 
-       return vm_map_pageable_user(map,
-                                   trunc_page(start),
-                                   round_page(start+size),
-                                   access);
+       return vm_map_pageable(map, trunc_page(start), round_page(start+size),
+                              access, TRUE, TRUE);
+}
+
+kern_return_t vm_wire_all(const ipc_port_t port, vm_map_t map, vm_wire_t flags)
+{
+       if (!IP_VALID(port))
+               return KERN_INVALID_HOST;
+
+       ip_lock(port);
+
+       if (!ip_active(port)
+           || (ip_kotype(port) != IKOT_HOST_PRIV)) {
+               ip_unlock(port);
+               return KERN_INVALID_HOST;
+       }
+
+       ip_unlock(port);
+
+       if (map == VM_MAP_NULL) {
+               return KERN_INVALID_TASK;
+       }
+
+       if (flags & ~VM_WIRE_ALL) {
+               return KERN_INVALID_ARGUMENT;
+       }
+
+       /*Check if range includes projected buffer;
+         user is not allowed direct manipulation in that case*/
+       if (projected_buffer_in_range(map, map->min_offset, map->max_offset)) {
+               return KERN_INVALID_ARGUMENT;
+       }
+
+       return vm_map_pageable_all(map, flags);
 }
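The vm_wire hunk above caps how much an unprivileged task may wire. A sketch of that admission check as a predicate (the 65536-byte limit comes from this diff, which carries a TODO to make it tunable):

```c
#include <assert.h>
#include <stdbool.h>

#define WIRE_LIMIT 65536UL  /* bytes; from this diff, not yet tunable */

/* Privileged callers and unwire requests (access == VM_PROT_NONE)
 * bypass the limit; otherwise the map's total wired size after the
 * request must stay within WIRE_LIMIT. */
static bool wire_permitted(bool priv, bool unwiring,
                           unsigned long size_wired, unsigned long size)
{
        return priv || unwiring || size_wired + size <= WIRE_LIMIT;
}
```

This is the negation of the `return KERN_NO_ACCESS` condition in vm_wire, now expressed against the renamed size_wired field.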

-- 
Alioth's /usr/local/bin/git-commit-notice on /srv/git.debian.org/git/pkg-hurd/gnumach.git


