#
#
# delete "wiki/DatabaseCompaction.moin"
#
# delete "wiki/MonotoneAndSSHAgent.moin"
#
# add_file "wiki/DatabaseCompaction.mdwn"
#  content [4570ae11de855036125adef9f7bb8def5a7ea801]
#
# add_file "wiki/MonotoneAndSSHAgent.mdwn"
#  content [b53b46f3c2e6215c723aa94fc7c41b6ce4ae694f]
#
# patch "local.css"
#  from [480590a474c45b6e3ed552d2f8d21261a1d099be]
#    to [eb16bc79ab3be8ec820254aa523274cd2970f2bc]
#
# patch "wiki/AutomateWishlist.mdwn"
#  from [7bd986c014f1219ccb5022bbd56f7f3fa07823f8]
#    to [7167097769b1caea9f204f18175053611330840d]
#
# patch "wiki/Building/Windows/VisualC8.mdwn"
#  from [1a834e5b700ecd717828d5f016a6d7f834ce38f5]
#    to [c14fc83752478ffb9a1778980aa6a00a37dcd7bc]
#
# patch "wiki/CreatingBranches.mdwn"
#  from [10d4b5e74de668bfecf8c3957ad1810e89b5d38e]
#    to [d96d1c605a39b99b5edbe90179aad04d70ede818]
#
# patch "wiki/MonotoneOnDebian.mdwn"
#  from [3be4b421940bc6a645e02932875d9fc1caef8c5e]
#    to [df31c0ce9573084063ea3230cf17df3ac0ed8fd5]
#
# patch "wiki/NotesOnTestingChangesetify.mdwn"
#  from [360c2233b395292730f095ad1de8403287d1329d]
#    to [d9dea40bdb9affb4865aa5357739363245e4fe7f]
#
# patch "wiki/SelfHostingInfo.mdwn"
#  from [7507ffb9e3c9e2e679e1a5ec0dc81d93fdf567e9]
#    to [0d894fd19e7042bbd95b23b7fc86aa76aa847932]
#
# patch "wiki/TestIntro.mdwn"
#  from [3f8c1eea5f36618fb4a32cb423680b125dd05d18]
#    to [2ae4b8f2d7ab0d6c9ee613f7ef0eac362bac2a77]
#
# patch "wiki/VersionedPolicy/Graydon.mdwn"
#  from [22266864c4af8611753ebb7eccf7ecd9bab95a2a]
#    to [d5882fd1b6f323eb964183a8503ea8d65955c7e8]
#
============================================================
--- wiki/DatabaseCompaction.mdwn 4570ae11de855036125adef9f7bb8def5a7ea801
+++ wiki/DatabaseCompaction.mdwn 4570ae11de855036125adef9f7bb8def5a7ea801
@@ -0,0 +1,47 @@
+[[tag migration-done]]
+
+We do a pretty good job of storing the file data compactly on disk, but we should do a better job of storing metadata compactly too. Here are some ways we could do that, in ascending order of invasiveness / controversiality:
+
+## Compact heights
+
+Heights are stored on disk as arrays of four-byte integers. As almost all entries in these arrays are small numbers, a variable-length representation would be a win, especially if it preserves the property of being able to use `memcmp` to compare them. I am 90% sure that a concatenated sequence of integers in the SQLite variable-length integer representation has this property. A sequence of ULEB128 integers does not, because ULEB128 encoded values are little-endian. This change can be totally invisible outside `rev_height.*`.
+
+## Put heights in the revisions table
+
+I'm not sure whether this is a good idea. There is exactly one height for every revision, so storing all the heights in the revisions table would be correct, and would probably take less space on disk. However, there are situations where we have to throw away all the heights and rebuild them (notably, with [[PartialPull]], horizon moves). It may be more efficient to keep them in a separate table so we can do `DELETE FROM heights;` rather than `UPDATE revisions SET height=NULL;`. Also, not every revision lookup needs to see the height, so we may get better disk cache behavior from keeping the heights on the side. This change would be invisible outside `database.cc`.
+
+## Use revision rowids in the revision_ancestry table
+
+The revision_ancestry table's schema currently reads like so:
+
+    CREATE TABLE revision_ancestry
+      (
+      parent not null,   -- joins with revisions.id
+      child not null,    -- joins with revisions.id
+      unique(parent, child)
+      );
+
+where *parent* and *child* are both SHA1 values stored as binary strings, joining (as it says) with the "id" field of the revisions table. We could instead turn them into `INTEGER`s and have them join with sqlite's internal `ROWID`s. This change could be confined to `database.cc` at the price of having to join this table against `revisions` on every access, or else we could make a globally pervasive change that ceases to use the SHA1 binary strings as cookies for revisions internally (using the `ROWID`s instead). This would make more sense if we also ...
+
+## Use revision rowids in other tables that join with revisions.id
+
+This concept can also be applied to the tables `heights`, `rosters`, `roster_deltas`, and `revision_certs`. Note that the IDs stored in `rosters.id` and `roster_deltas.id` are actually the associated *revision* hashes.
+
+## Use rowids for all foreign keys
+
+Other columns that are joined with (at least notionally; we don't use SQL joins much) and contain SHA1s are `files.id`/`file_deltas.id` (`file_deltas.base`) and `public_keys.id` (`revision_certs.keypair`).
+
+## Put all the SHA1s in a lookaside table
+
+At present just about every SHA1 value we have is stored at least twice: once as the actual hash of some blob, and one or more times as a pointer in some other data structure. We could put them all in a lookaside table, and use the `ROWID` in that table everywhere they appear now. We could then turn all the fields that point into that table into `INTEGER PRIMARY KEY`s and have SQLite collapse them into the `ROWID`. (This is basically an extra fillip on "Use rowids for all foreign keys" above.)
+
+# More radical changes
+
+## Compact revision/roster format
+
+Define a new on-disk format for revisions/rosters which is not textual and can be stored/queried more efficiently?
+
+## Experiment with other compression algorithms
+
+bzip2, p7zip, lzma...
============================================================
--- wiki/MonotoneAndSSHAgent.mdwn b53b46f3c2e6215c723aa94fc7c41b6ce4ae694f
+++ wiki/MonotoneAndSSHAgent.mdwn b53b46f3c2e6215c723aa94fc7c41b6ce4ae694f
@@ -0,0 +1,64 @@
+[[tag migration-wip]]
+
+
+# Using SSH-Agent with Monotone
+
+As of 0.34, monotone supports using ssh-agent to manage your monotone key. This means you can type the password for your key once instead of every time you run mtn. Intrigued? Here's how to do it:
+
+## Automatic
+
+There are two ways to use ssh-agent with monotone. The easiest is to simply start up ssh-agent and let monotone add your key to the agent. Just run `ssh-agent /bin/bash` (replace /bin/bash with whichever shell you wish) and then work as normal. The first time monotone needs to sign something (such as when checking in a change) it will ask for your password and then automatically add your key to ssh-agent. You can watch this happen by running `ssh-add -l`: when you first start ssh-agent it will say that there are no keys; after monotone adds your key it will list your key's fingerprint along with the e-mail address you gave as the key's name.
+
+## Manual
+
+You can also manually add your key to ssh-agent for use with monotone if you so desire. Run `mtn ssh_agent_export ~/.ssh/id_monotone`, type your key's password, then enter a new password twice (or the same one; it's up to you). Run `chmod 600 ~/.ssh/id_monotone`, then run `ssh-add ~/.ssh/id_monotone` after starting ssh-agent as above. As before, monotone will automatically use your monotone key from ssh-agent and not ask for your password as long as you're logged in. From then on, all you have to do is run `ssh-add ~/.ssh/id_monotone` after you start ssh-agent and your key will be added.
+
+## FAQ
+
+### What if I have more keys in ssh-agent?
+
+ssh-agent is generally used for holding private keys for use with ssh (as its name implies), so you may have some ssh keys in ssh-agent already. Monotone will only ever use a key that it also has in its key store (~/.monotone/keys). It will look through all of the keys in ssh-agent and use only the one that matches the key you have chosen for use with monotone (the `--key` option). Other keys will not be used.
+
+### Can I use the same key for SSH and monotone?
+
+This is technically possible, but it would currently require you to create your key in monotone, use `mtn ssh_agent_export` to export your key, extract the public key part for use in ~/.ssh/authorized_keys on remote hosts, and add your key to ssh-agent before sshing to them. In general this isn't worth the effort, and it also makes some sense to use different keys for these two very different purposes.
+
+### How can I make monotone not use ssh-agent?
+
+Pass the `--ssh-sign=no` switch.
+
+### Why does mtn serve ask for my password even though my key is added to ssh-agent?
+
+The serving code does some encryption that does not fit into ssh-agent's signing model, and so requires you to enter your password so that monotone can use the key directly. This should change in time as monotone's security is reworked.
+
+## ssh-agent and screen
+
+This does not directly apply to monotone but is useful for those developers who use screen heavily.
If you add the following to your .bash_profile your screen sessions will be able to use a forwarded ssh-agent when you reattach from a new ssh session: + if [ -n "$SSH_AUTH_SOCK" ]; then + screen_ssh_agent="/tmp/${USER}-screen-ssh-agent.sock" + if [ ${STY} ]; then + if [ -e ${screen_ssh_agent} ]; then + export SSH_AUTH_SOCK=${screen_ssh_agent} + fi + else + ln -snf ${SSH_AUTH_SOCK} ${screen_ssh_agent} + fi + fi +(copied from the third comment on http://woss.name/2005/08/17/using-ssh-agent-and-screen-together/) + +## Mac OS X and ssh-agent + +There is a program called SSHKeychain for Mac OS X which makes using ssh-agent even easier. It not only automatically starts ssh-agent and sets the appropriate environment variables for you but can also save your keys' passwords in your Keychain. Unfortunately it does not currently (as of version 0.7.2) support the types of keys that monotone exports. This can be fixed with a simple patch: + Index: Libs/SSHKey.m + =================================================================== + --- Libs/SSHKey.m (revision 96) + +++ Libs/SSHKey.m (working copy) + @@ -20,6 +20,8 @@ + return SSH_KEYTYPE_RSA1; + else if ([[lines objectAtIndex:0] isEqualToString:@"-----BEGIN RSA PRIVATE KEY-----"]) + return SSH_KEYTYPE_RSA; + + else if ([[lines objectAtIndex:0] isEqualToString:@"-----BEGIN ENCRYPTED PRIVATE KEY-----"]) + + return SSH_KEYTYPE_RSA; + else if ([[lines objectAtIndex:0] isEqualToString:@"-----BEGIN DSA PRIVATE KEY-----"]) + return SSH_KEYTYPE_DSA; + else ============================================================ --- local.css 480590a474c45b6e3ed552d2f8d21261a1d099be +++ local.css eb16bc79ab3be8ec820254aa523274cd2970f2bc @@ -144,10 +144,13 @@ code { } #content p { - font-size: 1.4em; line-height: 1.5em; } +#content > p { + font-size: 1.4em; +} + #content ul, #content ol { font-size: 1.3em; @@ -193,12 +196,15 @@ code { padding: 0.5em; margin-left: 3em; margin-right: 3em; - font-size: 1.4em; white-space: -moz-pre-wrap; white-space: 
pre-wrap; overflow: auto; } +#content > pre { + font-size: 1.4em; +} + #content blockquote { border: 1px dotted #474747; padding: 1em; @@ -227,11 +233,15 @@ code { #content table th, #content table td { - font-size: 1.3em; border: 1px solid #474747; padding: 0.3em; } +#content > table th, +#content > table td { + font-size: 1.3em; +} + #content table th { background: #E0E0E0; } ============================================================ --- wiki/AutomateWishlist.mdwn 7bd986c014f1219ccb5022bbd56f7f3fa07823f8 +++ wiki/AutomateWishlist.mdwn 7167097769b1caea9f204f18175053611330840d @@ -10,10 +10,10 @@ Missing (but useful) functions for the a * `roots`: returns all revision ids without parent. [**Implemented**] - * `get_file_length ID`: returns the size of the specified file. + * `get_file_length ID`: returns the size of the specified file. Currently one has to fetch the whole file in order to find out its length. NB: monotone does not actually store this information; the implementation inside monotone would just involve fetching the whole file and then reading its length. So this is a little dubious; it would encourage people to do things that are actually _slower_ (like fetching both the length and the full file, instead of just calculating the length themselves) when trying to optimize. Fetching a single file twice in a row through 'automate stdio' is pretty cheap, though. - + * `get_file ID OFFSET LEN`: returns only partial file contents. Might be handy for reading big files piecewise. NB: monotone does not actually have this capability internally (because sqlite does not have this capability internally); each time you requested a chunk, monotone would have to read the whole file into memory and then just deliver the requested part. Of course, it would then keep the file in its cache, so requesting multiple chunks in a row would be reasonably cheap... but you still have the whole file in memory. 
Let's leave things that "might be handy" until there's a real program that needs the capability :-).

@@ -26,12 +26,15 @@ Given the 'automate branches' example,th
 -- [[People/MarcelvanderBoom]] [[DateTime(2006-06-18T18:39:06Z)]]

 Given the 'automate branches' example, the whole of the document linked to above, and the growing number of commands both in automate variety and in the normal interface (mtn heads for example), my wish would be that the 'normal' interface and the 'automate' interface become one; said another way: "get rid of the mtn automate command". Using a cmdline switch or a format specifier, the output produced and the specifics of the effect can be steered. What 'callable from an automate stdio connection' means to a user: 'nothing'.
-
+
 To me:
-{{{ mtn heads}}}
+
+    $ mtn heads
+
 and
-{{{ mtn automate heads }}}
+    $ mtn automate heads
+
 are the same thing, just formatted differently, and as such I tend to look for an **option** to specify these formattings, not another **command**. Having something formatted as a plain rev list or basic io stanzas could be options to select quickly as they are used frequently. The document at berlios more or less says "give me all normal mtn commands, just in the automate interface". Why not do this in general?

 -- [[People/ThomasKeller]] [[DateTime(2006-08-28T10:33:00Z)]]

@@ -51,54 +54,54 @@ The current form has the following descr
 The current form has the following description:

     The output consists of one or more packets for each command. A packet looks like:
-
+
     <cmdnum>:<err>:<last>:<size>:<output>
-
+
     <cmdnum> is a decimal number specifying which command this output is from. It is 0 for the first command, and increases by one each time.
-
+
     <err> is 0 for success, 1 for a syntax error, and 2 for any other error.
-
+
     <last> is 'l' if this is the last piece of output for this command, and 'm' if there is more output to come.
-
+
     <size> is the number of bytes in the output.
-
+
     <output> is a piece of the output of the command.
-
-    All but the last packet for a given command will have the <last> field set to 'm'.
+    All but the last packet for a given command will have the <last> field set to 'm'.
+
 It is proposed to change it as so:

     The output consists of one or more packets for each command. A packet looks like:
-
+
     <cmdnum>:<err>:<stream>:<size>:<output>
-
+
     <cmdnum> is a decimal number specifying which command this output is from. It is 0 for the first command, and increases by one each time.
-
+
     <err> is 0 for no-error, 1 for a syntax error, and 2 for any other error. no-error on the final 'l' packet (see below) for a command indicates success of the command; on earlier packets it means no error yet. Once an error has been detected and indicated with a packet with non-zero error value, no later packet should go back to 0.
-
+
     <stream> is an identifier for which output stream this packet represents, allowing multiple streams to be multiplexed over the channel. The following streams are presently defined; more streams may be added later.
-
+
     'm' and 'l': the 'm' stream represents the normal stdout automate output of the command, formatted as described in the description for that command. The special 'l' value is described below.
-
-    'e': the 'e' stream represents any (unstructured) error message data. Internally, this maps to calls to the E() and N() print macros that would normally be written by the command to the program's stderr stream, if the automate sub-command had been called directly rather than via **stdio**.
-
-    'w': the 'w' stream represents any (unstructured) warning message data. Internally, this maps to calls to the W() print macro that would normally be written by the command to the program's stderr stream, if the automate sub-command had been called directly rather than via **stdio**.
-
-    'p': the 'p' stream represents any (unstructured) progress message data. Internally, this maps to calls to the P() print macro that would normally be written by the command to the program's stderr stream, if the automate sub-command had been called directly rather than via **stdio**.
-
+    'e': the 'e' stream represents any (unstructured) error message data. Internally, this maps to calls to the E() and N() print macros that would normally be written by the command to the program's stderr stream, if the automate sub-command had been called directly rather than via **stdio**.
+
+    'w': the 'w' stream represents any (unstructured) warning message data. Internally, this maps to calls to the W() print macro that would normally be written by the command to the program's stderr stream, if the automate sub-command had been called directly rather than via **stdio**.
+
+    'p': the 'p' stream represents any (unstructured) progress message data. Internally, this maps to calls to the P() print macro that would normally be written by the command to the program's stderr stream, if the automate sub-command had been called directly rather than via **stdio**.
+
 As needed, some of these (e,w,p) messages may be replaced with structured and well-defined error information for more direct interpretation by a gui frontend, not localised, on a different stream.
-
+
     'p': informative progress messages from the command during execution.
-
+
     't': ticker updates, as may be used by the gui to update a progress bar. The ticker stream is formatted as a series of lines, one for each ticker update. Each line contains <name>:<value>[/<max>]. The <name> is the ticker name, <value> is the value, the optional <max> is the max value (which may be used for percentage in a progress bar).
-
+
     <size> is the number of bytes in the output.
-
+
     <output> is a piece of the output of the command.
-
-    The last packet for a given command will have the <stream> field set to 'l'. This packet indicates termination of all streams for the command. Any content in this packet is considered to be part of the 'm' stream. The <size> in this packet is likely to be zero if there has been an error message that has prevented or interrupted normal output.
-
+    The last packet for a given command will have the <stream> field set to 'l'. This packet indicates termination of all streams for the command. Any content in this packet is considered to be part of the 'm' stream. The <size> in this packet is likely to be zero if there has been an error message that has prevented or interrupted normal output.
+
 If a client implementation gets a record for a stream type it does not recognise, the record should be ignored.
-
+
 The multiple stream encoding allows the output of errors and warnings to be associated with the command that generated them, allows the communication path to always stay in sync, and offers the opportunity to add other stream types for other useful purposes in the future as needs arise.
============================================================
--- wiki/Building/Windows/VisualC8.mdwn 1a834e5b700ecd717828d5f016a6d7f834ce38f5
+++ wiki/Building/Windows/VisualC8.mdwn c14fc83752478ffb9a1778980aa6a00a37dcd7bc
@@ -1,19 +1,21 @@
-[[!tag migration-auto]]
+[[!tag migration-done]]

 # Installing the toolchain

-This section is preliminary setup--once this has been completed once, you can
+This section is preliminary setup - once this has been completed, you can
 rebuild monotone regularly using only the instructions in the next section.
-||Package||Version||URL||
-||Visual C++ 2005 Express Edition||8.0||http://msdn.microsoft.com/vstudio/express/visualc/download/||
-||Windows Server 2003 R2 Platform SDK||3/14/2006||http://www.microsoft.com/downloads/details.aspx?familyid=0BAF2B35-C656-4969-ACE8-E4C0C0716ADB&displaylang=en||
-||Boost||1.33.1||http://prdownloads.sf.net/boost/boost_1_33_1.tar.bz2?download||
-||iconv||1.9.2||http://gnuwin32.sourceforge.net/downlinks/libiconv.php||
-||zlib||1.2.3||http://gnuwin32.sourceforge.net/downlinks/zlib.php||
+[[!table data="""
+Package|Version|URL
+Visual C++ 2005 Express Edition|8.0|<http://msdn.microsoft.com/vstudio/express/visualc/download/>
+Windows Server 2003 R2 Platform SDK|3/14/2006|<http://www.microsoft.com/downloads/details.aspx?familyid=0BAF2B35-C656-4969-ACE8-E4C0C0716ADB&displaylang=en>
+Boost|1.33.1|<http://prdownloads.sf.net/boost/boost_1_33_1.tar.bz2?download>
+iconv|1.9.2|<http://gnuwin32.sourceforge.net/downlinks/libiconv.php>
+zlib|1.2.3|<http://gnuwin32.sourceforge.net/downlinks/zlib.php>
+"""]]

-  * *Newer versions of the tools listed above are likely to work without too much trouble.*
+*Newer versions of the tools listed above are likely to work without too much trouble.*

 ### Installation instructions

@@ -35,9 +37,7 @@ section.
  1. Install a pre-built monotone binary from http://venge.net/monotone/downloads/
  1. Follow the self-hosting instructions at http://venge.net/monotone/self-hosting.html to get a copy of the monotone repository.
- 1. {{{
-$ monotone -d /path/to/monotone.db -b net.venge.monotone co monotone
-}}}
+ 1. `monotone -d /path/to/monotone.db -b net.venge.monotone co monotone`
  1. In Visual C++ 2005 Express Edition, open monotone/visualc/monotone.sln
  1. Select either the 'debug' or 'release' target from the toolbar dropdown.
  1. Review the C++ include path and Linker include path to ensure the paths to [[GnuWin32]] (for iconv and zlib) and Boost are correct.
============================================================
--- wiki/CreatingBranches.mdwn 10d4b5e74de668bfecf8c3957ad1810e89b5d38e
+++ wiki/CreatingBranches.mdwn d96d1c605a39b99b5edbe90179aad04d70ede818
@@ -1,13 +1,13 @@
-[[!tag migration-auto]]
+[[!tag migration-done]]

-If you want to create a new branch, the most intuitive way (that I've found) is to commit your changes with --branch= .
+If you want to create a new branch, the most intuitive way (that I've found) is to commit your changes with `--branch=<branchname>`.

-Alternatively, you may want to create a branch on an existing codebase before you have any changes to commit. To create a branch from the current workspace's revision, use the `mtn cert` e.g. {{{
-mtn cert h: branch com.yoyodine.bunchy.testing
-}}}
+Alternatively, you may want to create a branch on an existing codebase before you have any changes to commit. To create a branch from the current workspace's revision, use the `mtn cert` command, e.g.
+
+    $ mtn cert h: branch com.yoyodine.bunchy.testing
+
 You can then update to the branch:

-    mtn update --branch com.yoyodine.bunchy.testing
+    $ mtn update --branch com.yoyodine.bunchy.testing

 Advantages of this approach are that multiple developers can update and begin working and committing to that branch straight away without having to coordinate their first commit between them. Also, I'm very forgetful, and I find that I often accidentally commit to the head - it's easy to forget that you intended to start a new branch, so making this the first step in your workflow prevents this from happening.
============================================================
--- wiki/MonotoneOnDebian.mdwn 3be4b421940bc6a645e02932875d9fc1caef8c5e
+++ wiki/MonotoneOnDebian.mdwn df31c0ce9573084063ea3230cf17df3ac0ed8fd5
@@ -1,8 +1,9 @@
-[[!tag migration-auto]]
+[[!tag migration-done]]

 # Using Monotone on Debian

 Monotone packages can currently be found in the Debian repositories. Monotone changes very rapidly and versions in sarge and etch may be slightly dated. It is recommended that you use the monotone package from the monotone [website](http://venge.net/monotone) or the version that is in sid/unstable, which is generally kept up to date.
+
+    apt-get install monotone

 # Running a Monotone Server on Debian

@@ -13,21 +14,22 @@ The sarge monotone packages are currentl
 The sarge monotone packages are currently built with [pbuilder](http://www.netfort.gr.jp/~dancer/software/pbuilder-doc/pbuilder-doc.html) running on debian testing. Pbuilder basically extracts a minimal debian system (of a chosen distro) to a temporary directory, chroots there, installs necessary build-deps, and then builds the given package. `pbuilder create` is used to create a base image (tarball). Note that `pdebuild` is easiest to run as root, due to the chrooting.

-Since sarge has too old a version of boost, 1.33 is built (using pbuilder) for sarge then linked to monotone statically. [''I can't find the my build directory for boost, there probably weren't many caveats. I may have had to disable ICU support? - [[People/MattJohnston]]'']. The sarge boost .debs can then be placed in a directory such as `/var/cache/pbuilder/localpackages/`. Create a script `/var/cache/pbuilder/hooks/D70results` with the contents {{{
-#!/bin/sh
-cd /var/cache/pbuilder/localpackages/
-/usr/bin/dpkg-scanpackages . /dev/null > /var/cache/pbuilder/localpackages/Packages
-echo "deb file:/var/cache/pbuilder/localpackages ./" >> /etc/apt/sources.list
-/usr/bin/apt-get update
-}}}
-and to `/etc/pbuilderrc` add the line below, so that those packages are found within the pbuilder instance. {{{
-HOOKDIR="/var/cache/pbuilder/hooks"
-}}}
+Since sarge has too old a version of boost, 1.33 is built (using pbuilder) for sarge then linked to monotone statically. (I can't find my build directory for boost; there probably weren't many caveats. I may have had to disable ICU support? - [[People/MattJohnston]]). The sarge boost .debs can then be placed in a directory such as `/var/cache/pbuilder/localpackages/`. Create a script `/var/cache/pbuilder/hooks/D70results` with the contents

-To build the actual monotone package, untar the release tarball or perform a fresh checkout and neccessary auto* incantations (a clean checkout is required, otherwise `pdebuild` will try to tar up old .o files etc and it will be very slow). Edit `debian/changelog` and change the version to something like `0.32-sarge0.1`. At the top of `debian/rules`, add a line {{{
-DEB_CONFIGURE_USER_FLAGS=--enable-static-boost
-}}}
+
+    #!/bin/sh
+    cd /var/cache/pbuilder/localpackages/
+    /usr/bin/dpkg-scanpackages . /dev/null > /var/cache/pbuilder/localpackages/Packages
+    echo "deb file:/var/cache/pbuilder/localpackages ./" >> /etc/apt/sources.list
+    /usr/bin/apt-get update
+
+and to `/etc/pbuilderrc` add the line below, so that those packages are found within the pbuilder instance.
+
+    HOOKDIR="/var/cache/pbuilder/hooks"
+
+To build the actual monotone package, untar the release tarball or perform a fresh checkout and the necessary auto* incantations (a clean checkout is required, otherwise `pdebuild` will try to tar up old .o files etc and it will be very slow). Edit `debian/changelog` and change the version to something like `0.32-sarge0.1`. At the top of `debian/rules`, add a line
+
+    DEB_CONFIGURE_USER_FLAGS=--enable-static-boost
+
 The sarge version of dpkg-buildpackage doesn't like the `${source:Version}` substitution in `debian/control` so replace it with the now-deprecated `${Source-Version}`. The build process can be run in the source directory (as root) simply with `pdebuild`. The output .debs should end up in `/var/cache/pbuilder/result`. These can then be installed and tested with a standalone testsuite. Run `ldd /usr/bin/mtn` to check that only one libstdc++ has been included.
============================================================
--- wiki/NotesOnTestingChangesetify.mdwn 360c2233b395292730f095ad1de8403287d1329d
+++ wiki/NotesOnTestingChangesetify.mdwn d9dea40bdb9affb4865aa5357739363245e4fe7f
@@ -1,4 +1,4 @@
-[[!tag migration-auto]]
+[[!tag migration-done]]

 The procedure for migration from pre-0.16 databases ("changesetify") is currently not tested by the automatic testsuite. This is partially because there are pretty good odds that no such databases remain in use (so it doesn't matter if the code works) and partially because generating the necessary database dumps is not easy.

@@ -6,125 +6,129 @@ I (Zack) recently attempted to construct
  1. The appropriate version to use to construct a test case is monotone 0.14. Versions 0.15 and 0.16 came out in the *middle* of the changeset transition, and have interesting bugs that we don't want to deal with. Unfortunately, the code at `t:monotone-0.14` requires patches to compile with gcc 4.1 and boost 1.33. The patches are at the bottom of this page.
  1. The code for constructing a test repository (from `tests/schema_migration`) uses mtn features that did not exist in 0.14:
-    * The `attr` commands. It is necessary to translate these to manipulations of `.mt-attr`, which is how attributes used to be done. (Note that `schema_migration_with_rosterify` does not attempt to test attribute migration - which is worrisome, as that is a major part of the roster transition.)
-    * The `--message-file` option is not present, and putting the message on the command line does not appear to work.
+     * The `attr` commands. It is necessary to translate these to manipulations of `.mt-attr`, which is how attributes used to be done. (Note that `schema_migration_with_rosterify` does not attempt to test attribute migration - which is worrisome, as that is a major part of the roster transition.)
+     * The `--message-file` option is not present, and putting the message on the command line does not appear to work.
     * It is not clear whether the `--date` option works. It is accepted for `commit`, but not `propagate`.
     * monotone 0.14 wants `[pubkey]` and `[privkey]` packets instead of `[keypair]` packets. I do not know how to convert the latter to the former. I snarfed the key out of 0.14's `testsuite.at` instead, but I'm not sure it's the same one.
  1. There might be an outright bug in the code to generate test databases: `testfile2` is added, then we revert to a revision in which it didn't exist, *modify it*, and check in another revision without adding it again. Is that intentional?
- 1. If you bull your way past all of that, you get a database, which you can dump with the old version, reload with the new one, and run through `db migrate`. Then you get this from `db changesetify`: {{{mtn: rebuilding revision graph from manifest certs
+ 1. If you bull your way past all of that, you get a database, which you can dump with the old version, reload with the new one, and run through `db migrate`. Then you get this from `db changesetify`:
+
+
+mtn: rebuilding revision graph from manifest certs
 mtn: certs in | certs out | nodes | revs out
 mtn:       28 |         0 |     5 |        0
 mtn: scanning for bogus merge edges
-mtn: fatal: std::logic_error: ../S-vanilla/revision.cc:861: invariant 'fetching nonexistent entry from node_to_old_rev' violated}}}  I will make the debugging log and/or the database dump available if anyone asks.
+mtn: fatal: std::logic_error: ../S-vanilla/revision.cc:861: invariant 'fetching nonexistent entry from node_to_old_rev' violated
+
-----
+I will make the debugging log and/or the database dump available if anyone asks.
 ## patches for 0.14
-{{{#!cplusplus
-============================================================
---- cryptopp/integer.cpp f2a0e049b9aef571c5807afd972a77f377482d8f
-+++ cryptopp/integer.cpp e2cf6746ad5ce51d0bd2406ec07f9bcda36f5aa9
-@@ -1473,7 +1473,7 @@ void [[PentiumOptimized]]::Square4(word* Y,
+ #!cplusplus
+ ============================================================
+ --- cryptopp/integer.cpp f2a0e049b9aef571c5807afd972a77f377482d8f
+ +++ cryptopp/integer.cpp e2cf6746ad5ce51d0bd2406ec07f9bcda36f5aa9
+ @@ -1473,7 +1473,7 @@ void [[PentiumOptimized]]::Square4(word* Y,
- :
- : "D" (Y), "S" (X)
-- : "eax", "ecx", "edx", "ebp", "memory"
-+ : "eax", "ecx", "edx", "memory"
- );
- }
+ :
+ : "D" (Y), "S" (X)
+ - : "eax", "ecx", "edx", "ebp", "memory"
+ + : "eax", "ecx", "edx", "memory"
+ );
+ }
-============================================================
---- cryptopp/pubkey.h e3fbc0074a9c736ed86f4e4003de49f2bdf89c96
-+++ cryptopp/pubkey.h 0328c2a5276ff0173a6eb0a44d5493fd943d2b87
-@@ -38,6 +38,8 @@
- #include "fips140.h"
- #include "argnames.h"
- #include "modarith.h"
-+#include "asn.h"
-+
- #include
+ ============================================================
+ --- cryptopp/pubkey.h e3fbc0074a9c736ed86f4e4003de49f2bdf89c96
+ +++ cryptopp/pubkey.h 0328c2a5276ff0173a6eb0a44d5493fd943d2b87
+ @@ -38,6 +38,8 @@
+ #include "fips140.h"
+ #include "argnames.h"
+ #include "modarith.h"
+ +#include "asn.h"
+ +
+ #include
- // VC60 workaround: this macro is defined in shlobj.h and conflicts with a template parameter used in this file
-@@ -745,8 +747,6 @@ void DL_[[PublicKey]]::[[AssignFrom]](const N
- }
- }
+ // VC60 workaround: this macro is defined in shlobj.h and conflicts with a template parameter used in this file
+ @@ -745,8 +747,6 @@ void DL_[[PublicKey]]::[[AssignFrom]](const N
+ }
+ }
--class OID;
--
- //! .
- template
- class DL_[[KeyImpl]] : public PK
-============================================================
---- merkle_tree.cc 565e0f4242011220ff3b20574cff68d74552ee2a
-+++ merkle_tree.cc 2abde1a4a10709bec5ad40e325ce4016b1fcd0e1
-@@ -144,7 +144,7 @@ merkle_node::extended_prefix(size_t slot
+ -class OID;
+ -
+ //! .
+ template
+ class DL_[[KeyImpl]] : public PK
+ ============================================================
+ --- merkle_tree.cc 565e0f4242011220ff3b20574cff68d74552ee2a
+ +++ merkle_tree.cc 2abde1a4a10709bec5ad40e325ce4016b1fcd0e1
+ @@ -144,7 +144,7 @@ merkle_node::extended_prefix(size_t slot
- void
- merkle_node::extended_prefix(size_t slot,
-- dynamic_bitset & extended) const
-+ dynamic_bitset & extended) const
- {
- // remember, in a dynamic_bitset, bit size()-1 is most significant
- check_invariants();
-@@ -158,7 +158,7 @@ merkle_node::extended_raw_prefix(size_t
- merkle_node::extended_raw_prefix(size_t slot,
- prefix & extended) const
- {
-- dynamic_bitset ext;
-+ dynamic_bitset ext;
- extended_prefix(slot, ext);
- ostringstream oss;
- to_block_range(ext, ostream_iterator(oss));
-@@ -363,7 +363,7 @@ pick_slot_and_prefix_for_value(id const
- pick_slot_and_prefix_for_value(id const & val,
- size_t level,
- size_t & slotnum,
-- dynamic_bitset & pref)
-+ dynamic_bitset & pref)
- {
- pref.resize(val().size() * 8);
- from_block_range(val().begin(), val().end(), pref);
-@@ -401,7 +401,7 @@ insert_into_merkle_tree(app_state & app,
- encode_hexenc(leaf, hleaf);
+ void
+ merkle_node::extended_prefix(size_t slot,
+ - dynamic_bitset & extended) const
+ + dynamic_bitset & extended) const
+ {
+ // remember, in a dynamic_bitset, bit size()-1 is most significant
+ check_invariants();
+ @@ -158,7 +158,7 @@ merkle_node::extended_raw_prefix(size_t
+ merkle_node::extended_raw_prefix(size_t slot,
+ prefix & extended) const
+ {
+ - dynamic_bitset ext;
+ + dynamic_bitset ext;
+ extended_prefix(slot, ext);
+ ostringstream oss;
+ to_block_range(ext, ostream_iterator(oss));
+ @@ -363,7 +363,7 @@ pick_slot_and_prefix_for_value(id const
+ pick_slot_and_prefix_for_value(id const & val,
+ size_t level,
+ size_t & slotnum,
+ - dynamic_bitset & pref)
+ + dynamic_bitset & pref)
+ {
+ pref.resize(val().size() * 8);
+ from_block_range(val().begin(), val().end(), pref);
+ @@ -401,7 +401,7 @@ insert_into_merkle_tree(app_state & app,
+ encode_hexenc(leaf, hleaf);
- size_t slotnum;
-- dynamic_bitset pref;
-+ dynamic_bitset pref;
- pick_slot_and_prefix_for_value(leaf, level, slotnum, pref);
+ size_t slotnum;
+ - dynamic_bitset pref;
+ + dynamic_bitset pref;
+ pick_slot_and_prefix_for_value(leaf, level, slotnum, pref);
- ostringstream oss;
-============================================================
---- merkle_tree.hh 8adb43063411673b9fd8ca07410e53a6e2d308f3
-+++ merkle_tree.hh 635f951bdc8a6d045c8b063afd228508944f7513
-@@ -55,9 +55,9 @@ struct merkle_node
- struct merkle_node
- {
- size_t level;
-- boost::dynamic_bitset pref;
-+ boost::dynamic_bitset pref;
- size_t total_num_leaves;
-- boost::dynamic_bitset bitmap;
-+ boost::dynamic_bitset bitmap;
- std::vector slots;
- netcmd_item_type type;
+ ostringstream oss;
+ ============================================================
+ --- merkle_tree.hh 8adb43063411673b9fd8ca07410e53a6e2d308f3
+ +++ merkle_tree.hh 635f951bdc8a6d045c8b063afd228508944f7513
+ @@ -55,9 +55,9 @@ struct merkle_node
+ struct merkle_node
+ {
+ size_t level;
+ - boost::dynamic_bitset pref;
+ + boost::dynamic_bitset pref;
+ size_t total_num_leaves;
+ - boost::dynamic_bitset bitmap;
+ + boost::dynamic_bitset bitmap;
+ std::vector slots;
+ netcmd_item_type type;
-@@ -74,7 +74,7 @@ struct merkle_node
- void set_raw_slot(size_t slot, id const & val);
- void set_hex_slot(size_t slot, hexenc const & val);
+ @@ -74,7 +74,7 @@ struct merkle_node
+ void set_raw_slot(size_t slot, id const & val);
+ void set_hex_slot(size_t slot, hexenc const & val);
-- void extended_prefix(size_t slot, boost::dynamic_bitset & extended) const;
-+ void extended_prefix(size_t slot, boost::dynamic_bitset & extended) const;
- void extended_raw_prefix(size_t slot, prefix & extended) const;
- void extended_hex_prefix(size_t slot, hexenc & extended) const;
+ - void extended_prefix(size_t slot, boost::dynamic_bitset & extended) const;
+ + void extended_prefix(size_t slot, boost::dynamic_bitset & extended) const;
+ void extended_raw_prefix(size_t slot, prefix & extended) const;
+ void extended_hex_prefix(size_t slot, hexenc & extended) const;
-@@ -106,7 +106,7 @@ void pick_slot_and_prefix_for_value(id c
- merkle_node const & node);
- void pick_slot_and_prefix_for_value(id const & val, size_t level,
-- size_t & slotnum, boost::dynamic_bitset & pref);
-+ size_t & slotnum, boost::dynamic_bitset & pref);
+ @@ -106,7 +106,7 @@ void pick_slot_and_prefix_for_value(id c
+ merkle_node const & node);
+ void pick_slot_and_prefix_for_value(id const & val, size_t level,
+ - size_t & slotnum, boost::dynamic_bitset & pref);
+ + size_t & slotnum, boost::dynamic_bitset & pref);
+ // this inserts a leaf into the appropriate position in a merkle
+ // tree, writing it to the db and updating any other nodes in the
+
- // this inserts a leaf into the appropriate position in a merkle
- // tree, writing it to the db and updating any other nodes in the
-}}}
============================================================
--- wiki/SelfHostingInfo.mdwn 7507ffb9e3c9e2e679e1a5ec0dc81d93fdf567e9
+++ wiki/SelfHostingInfo.mdwn 0d894fd19e7042bbd95b23b7fc86aa76aa847932
@@ -1,7 +1,5 @@
-[[!tag migration-auto]]
+[[!tag migration-done]]
-#acl Known:read,write All:read
-
 # Self Hosting
 Monotone development is self-hosting. This means that once you have a copy of monotone, you can use it to track the development of monotone from your own machine, rather than using CVS.
@@ -12,10 +10,12 @@ I have set up a netsync server (a copy o
 1. Get or build a copy of the most recent monotone release.
 1. Initialize a database, which is just a regular file you'll store versions and certificates into.
- {{{$ mtn --db=mtn.db db init
-}}}
+
+$ mtn --db=mtn.db db init
+  
 1. Run this command, you should get something similar to these results:
- {{{$ mtn --db=mtn.db pull monotone.ca "net.venge.monotone*"
+
+$ mtn --db=mtn.db pull monotone.ca "net.venge.monotone*"
 mtn: setting default server to monotone.ca
 mtn: setting default branch include pattern to 'net.venge.monotone*'
 mtn: setting default branch exclude pattern to ''
@@ -30,69 +30,47 @@ mtn:   86.4 M |       528 | 50445/50445 
 mtn:   36.1 k |       480 |        0 |       0
 mtn: bytes in | bytes out |    certs in |     revs in
 mtn:   86.4 M |       528 | 50445/50445 | 12557/12557
-mtn: successful exchange with monotone.ca}}}
+mtn: successful exchange with monotone.ca
+  
 Note the key fingerprint in that output; you may wish to verify that it really is `3e6f5225bc2fffacbc20c9de37ff2dae1e20892e`.
 /!\ In case monotone.ca is down, you could try one of these alternative servers that sync to each other regularly:
- ||**server**||**fingerprint**||
- ||monotone.mtn-host.prjek.net||`a52f85615cb2445989f525bf17a603250381a751`||
- ||204.152.190.23||`fee080c8906fc3a9a601587807df0a5088a3fdd8`||
+ [[!table data="""
+ server|fingerprint
+ monotone.mtn-host.prjek.net|`a52f85615cb2445989f525bf17a603250381a751`
+ 204.152.190.23|`fee080c8906fc3a9a601587807df0a5088a3fdd8`"""]]
 This is your initial pull so it will take a bit of time, as it has to transfer a few megabytes of history to you. Subsequent pulls will be much faster.
 When you're done pulling you can take a look at the heads of the branch you picked up. You should get something like this (though with a different head version, different author, etc.):
- {{{$ mtn --db=mtn.db --branch=net.venge.monotone heads
+
+$ mtn --db=mtn.db --branch=net.venge.monotone heads
 mtn: branch 'net.venge.monotone' is currently merged:
-d947ac9f47d3c3e61af60822cbf0491ae69b2bef address@hidden 2006-08-14T12:29:35}}}
+d947ac9f47d3c3e61af60822cbf0491ae69b2bef address@hidden 2006-08-14T12:29:35
+  
 You can now look at the certs on a particular version; we will use the version tagged as monotone-0.28:
- {{{$ mtn --db=mtn.db ls certs t:monotone-0.28
+
+    $ mtn --db=mtn.db ls certs t:monotone-0.28
+    mtn: expanding selection 't:monotone-0.28'
+    mtn: expanded to '8c6ce7cb2ccd21290b435e042c2be4554ec6a048'
+    ...
+  
+ And you can also check out that version:
+
+$ mtn --db=mtn.db checkout -r t:monotone-0.28 monotone
 mtn: expanding selection 't:monotone-0.28'
 mtn: expanded to '8c6ce7cb2ccd21290b435e042c2be4554ec6a048'
---------------------------------------------------------------------------------
+  
-Key : address@hidden
-Sig : ok
-Name : author
-Value : address@hidden
--------------------------------------------------------------------------------
-
-Key : address@hidden
-Sig : ok
-Name : branch
-Value : net.venge.monotone
--------------------------------------------------------------------------------
-
-Key : address@hidden
-Sig : ok
-Name : changelog
-Value : 2006-07-22 Nathaniel Smith
- :
- : * NEWS: Set date, and it turns out AUTOMATE() was there in
- : 0.27...
--------------------------------------------------------------------------------
-
-Key : address@hidden
-Sig : ok
-Name : date
-Value : 2006-07-22T08:43:33
--------------------------------------------------------------------------------
-
-Key : address@hidden
-Sig : ok
-Name : tag
-Value : monotone-0.28}}}
-
- And you can also check out that version:
- {{{$ mtn --db=mtn.db checkout -r t:monotone-0.28 monotone
-mtn: expanding selection 't:monotone-0.28'
-mtn: expanded to '8c6ce7cb2ccd21290b435e042c2be4554ec6a048'}}}
 1. That's it, you're done! You will now find yourself with a checked out working copy in the directory monotone, which you can edit, merge, commit, etc. In the future, you can pull new versions from my server and update your working copy from your database using this pair of commands:
- {{{$ cd monotone ...
+
+$ cd monotone ...
 $ mtn pull ...
-$ mtn update ...}}}
+$ mtn update ...
+  
 ## Making your own server
-Setting up your own server is covered in the "Network Service" section of the documentation. Once you have your own server running, if you want me to fetch changes directly from it (and merge them with my monotone versions) you should send an email with your server's host name, collection name, and public key, to the mailing list.
+Setting up your own server is covered in the [Network Service Revisited](http://monotone.ca/monotone.html#Network-Service-Revisited) section of the documentation. Once you have your own server running, if you want me to fetch changes directly from it (and merge them with my monotone versions) you should send an email with your server's host name, collection name, and public key, to the mailing list.
 If you have difficulty setting up a server, or are feeling lazy and would like us to host one for you, send an email to the list. For projects of a reasonable size, or if you just want to play around, we'd be happy to host an extra server for you.
============================================================
--- wiki/TestIntro.mdwn 3f8c1eea5f36618fb4a32cb423680b125dd05d18
+++ wiki/TestIntro.mdwn 2ae4b8f2d7ab0d6c9ee613f7ef0eac362bac2a77
@@ -1,10 +1,10 @@
-[[!tag migration-auto]]
+[[!tag migration-done]]
 Put newbie info about lua tests here. Please write your observations about not obvious things when playing with lua tests.
-## How to run lua tests
+## How to run lua tests
 One must build `tester`
 make tester # on unix
@@ -13,7 +13,7 @@ To run the tests invoke following comman
 To run the tests invoke following command:
-{{{./tester ./tester lua-testsuite.lua [tests ...]}}}
+    ./tester ./tester lua-testsuite.lua [tests ...]
 Testsuite invoked without any test names invoke all tests.
@@ -21,9 +21,12 @@ This is basic unittesting stuff. Interes
 This is basic unittesting stuff. Interesting functions:
+[[!table data="""
+function name | description
+`check( condition: boolean)` | fails when condition is not met
+`qgrep( pattern: string, filename: string) ` | returns true if pattern is found in given file
+`check( mtn(args ...), expected_return_value,catch_stdout, catch_stderr)` | idiom: invoke monotone with given args, check return status and possibly remember its standard and diagnostic output
+`get(filename) ` | read contents of file and return string
+`mkdir(name) ` | create directory
+`addfile(name, contents) ` | idiom for writefile(name, contents) , mtn add name
+"""]]
-|| `check( condition: boolean)` || fails when condition is not met ||
-|| `qgrep( pattern: string, filename: string) ` || returns true if pattern is found in given file ||
-|| `check( mtn(args ...), expected_return_value,catch_stdout, catch_stderr)` || idiom: invoke monotone with given args, check return status and possibly remember its standard and diagnostic output ||
-|| `get(filename) ` || read contents of file and return string ||
-|| `mkdir(name) ` || create directory ||
-|| `addfile(name, contents) ` || idiom for writefile(name, contents) , mtn add name ||
============================================================
--- wiki/VersionedPolicy/Graydon.mdwn 22266864c4af8611753ebb7eccf7ecd9bab95a2a
+++ wiki/VersionedPolicy/Graydon.mdwn d5882fd1b6f323eb964183a8503ea8d65955c7e8
@@ -1,37 +1,35 @@
-[[!tag migration-auto]]
+[[!tag migration-done]]
 # Current Design
-We have a structure called a policy. Policies look like this
-(if you happen to be an ML programmer): {{{
+We have a structure called a policy. Policies look like this
+(if you happen to be an ML programmer):
- type policy = { id: branch_id;
+ type policy = { id: branch_id;
 killed: bool; defined_users: (string, user) map; meta: (string, string) map; delegation: delegation; }
- and delegation = [[SimpleDelegate]] of branch_policy
+ and delegation = [[SimpleDelegate]] of branch_policy
 | [[FullDelegate]] of { content_branch: branch_policy option; subpolicy_branches: (string, branch_policy) map; }
- and branch_policy = { id: branch_id;
+ and branch_policy = { id: branch_id;
 permitted_users: string set; status: branch_status; meta: (string,string) map; }
- and branch_status = Active | Dormant
+ and branch_status = Active | Dormant
- and branch_id = Root
+ and branch_id = Root
 | [[ChildOf]](branch_id, nonce) // hashed to compress to constant size
- and user = { pk: key_id;
- meta: (string,string) map; }
+ and user = { pk: key_id;
+ meta: (string,string) map; }
-}}}
-
 Policies are stored in branches, and all branches in monotone become fully hierarchical:
@@ -46,12 +44,12 @@ decisions that name and constrain the su
 same human-friendly name as the policy branch: for example, `foo.bar` may name both a particular content branch 'and' contain policy decisions that name and constrain the sub-branches `foo.bar.baz` and
-{{{foo.bar.quux}}}.
+`foo.bar.quux`.
 Aspects of policy accumulate "down" the tree, from the root policy
-towards content trees. So a branch like `foo.bar.baz` is judged by
-the policy of `foo`, plus the policy of `foo.bar`, plus the policy of
-{{{foo.bar.baz}}}.
+towards content trees. So a branch like `foo.bar.baz` is judged by
+the policy of `foo`, plus the policy of `foo.bar`, plus the policy of
+`foo.bar.baz`.
 At the root of the policy-branch tree there is a branch that all projects everywhere refer to as their parent, called `Root`. Users
@@ -65,17 +63,17 @@ Why does monotone trust the user's key w
 the contents of a user's Root branch is generally private.
 Why does monotone trust the user's key when signing policy in the `Root`
-policy branch? Because there is an external, non-versioned list of
-keys that each user keeps (typically a 1-entry list) that defines
+policy branch? Because there is an external, non-versioned list of
+keys that each user keeps (typically a 1-entry list) that defines
 which keys to trust for the `Root` branch. The important design point is that the decision for "which policy node represents the head of a policy branch" is made by evaluating 'certs' on that policy branch, and the validity of those certs is determined by looking in the policy you have in the 'parent' branch. So branch `foo`
-says which keys are legal for manipulating policy on branch `foo.bar`
-(and its sub-branches), and `foo.bar` introduces the sub-branch
-{{{foo.bar.baz}}} more keys which are legal for manipulating policy on
+says which keys are legal for manipulating policy on branch `foo.bar`
+(and its sub-branches), and `foo.bar` introduces the sub-branch
+`foo.bar.baz` more keys which are legal for manipulating policy on
 branch `foo.bar.baz`. If there are multiple policy heads, one is chosen at random and used;
@@ -94,10 +92,10 @@ There is also a per-policy-branch "kill
 up). There is also a per-policy-branch "kill switch" that is sticky; once
-the kill-switch on a branch is set it can never be un-set, so it
+the kill-switch on a branch is set it can never be un-set, so it
 represents a way for an admin to permanently retire a problematic branch, for example if the admin's key is compromised. We also include a form of delegation that exists purely for inserting extra levels of authorization and kill-switches, without introducing new branch
+name components. This is a simple delegate.
-name components. This is a [[SimpleDelegate]].
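The accumulation rule in the policy design above (a branch like `foo.bar.baz` is judged by the policy of `foo`, plus `foo.bar`, plus `foo.bar.baz`) can be sketched in a few lines of Python. This is purely illustrative: the dict-merge override semantics, the function name, and the sample policy fragments are all assumptions for the sake of the example, not monotone code.

```python
# Illustrative sketch (not monotone's actual implementation) of policy
# settings accumulating "down" the branch tree, from the root prefix
# towards the content branch.

def accumulated_policy(policies, branch):
    """Merge policy fragments from the root prefix down to `branch`.

    Settings declared nearer the leaf override those declared nearer
    the root; prefixes with no policy fragment contribute nothing.
    """
    parts = branch.split(".")
    merged = {}
    for i in range(1, len(parts) + 1):
        prefix = ".".join(parts[:i])
        merged.update(policies.get(prefix, {}))
    return merged

# Hypothetical policy fragments keyed by branch-name prefix.
policies = {
    "foo":         {"permitted_users": {"alice"}, "status": "Active"},
    "foo.bar":     {"permitted_users": {"alice", "bob"}},
    "foo.bar.baz": {"status": "Dormant"},
}

# foo.bar.baz inherits permitted_users from foo.bar and overrides status.
effective = accumulated_policy(policies, "foo.bar.baz")
```

Under these assumed semantics, `foo.bar.baz` ends up with the user set introduced at `foo.bar` and the status it declares itself, which is the "judged by every level above it" behaviour the design describes.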