From: gnunet
Subject: [GNUnet-SVN] [gnurl] 189/219: docs: Markdown and misc improvements [ci skip]
Date: Wed, 22 May 2019 19:18:48 +0200

This is an automated email from the git hooks/post-receive script.

ng0 pushed a commit to branch master
in repository gnurl.

commit f3e0f071b14fcb46a453f69bdf4e062bcaacf362
Author: Viktor Szakats <address@hidden>
AuthorDate: Thu May 16 22:11:27 2019 +0000

    docs: Markdown and misc improvements [ci skip]
    
    Approved-by: Daniel Stenberg
    Closes #3896
---
 docs/CIPHERS.md           |  10 +-
 docs/CODE_STYLE.md        |   6 +-
 docs/INSTALL.md           |  54 +++----
 docs/INTERNALS.md         | 388 +++++++++++++++++++++++-----------------------
 docs/RELEASE-PROCEDURE.md |   2 +-
 docs/SSL-PROBLEMS.md      |  10 +-
 6 files changed, 240 insertions(+), 230 deletions(-)

diff --git a/docs/CIPHERS.md b/docs/CIPHERS.md
index c01180426..0b7ccebf9 100644
--- a/docs/CIPHERS.md
+++ b/docs/CIPHERS.md
@@ -271,7 +271,8 @@ When specifying multiple cipher names, separate them with colon (`:`).
 
 ## GSKit
 
-Ciphers are internally defined as numeric codes (https://www.ibm.com/support/knowledgecenter/ssw_ibm_i_73/apis/gsk_attribute_set_buffer.htm),
+Ciphers are internally defined as
+[numeric codes](https://www.ibm.com/support/knowledgecenter/ssw_ibm_i_73/apis/gsk_attribute_set_buffer.htm),
 but libcurl maps them to the following case-insensitive names.
 
 ### SSL2 cipher suites (insecure: disabled by default)
@@ -446,9 +447,12 @@ but libcurl maps them to the following case-insensitive names.
 `DHE-PSK-CHACHA20-POLY1305`,
 `EDH-RSA-DES-CBC3-SHA`,
 
-## WinSSL
+## Schannel
 
-WinSSL allows the enabling and disabling of encryption algorithms, but not specific ciphersuites. They are defined by Microsoft (https://msdn.microsoft.com/en-us/library/windows/desktop/aa375549(v=vs.85).aspx)
+Schannel allows the enabling and disabling of encryption algorithms, but not
+specific ciphersuites. They are
+[defined](https://docs.microsoft.com/windows/desktop/SecCrypto/alg-id) by
+Microsoft.
 
 `CALG_MD2`,
 `CALG_MD4`,
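
For reference, cipher lists like the ones documented above are handed to libcurl through the `CURLOPT_SSL_CIPHER_LIST` option (the command-line `--ciphers` option corresponds to it). A minimal sketch in C; the URL is a placeholder and the cipher names shown are OpenSSL-style, so they only apply when libcurl was built against a backend that accepts them:

    #include <curl/curl.h>

    int main(void)
    {
      CURL *curl = curl_easy_init();
      if(curl) {
        curl_easy_setopt(curl, CURLOPT_URL, "https://example.com/");
        /* colon-separated cipher names; which names are valid depends on
           the TLS backend this libcurl build uses */
        curl_easy_setopt(curl, CURLOPT_SSL_CIPHER_LIST,
                         "ECDHE-RSA-AES128-GCM-SHA256:"
                         "ECDHE-RSA-AES256-GCM-SHA384");
        curl_easy_perform(curl);
        curl_easy_cleanup(curl);
      }
      return 0;
    }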
diff --git a/docs/CODE_STYLE.md b/docs/CODE_STYLE.md
index 2d275cd7d..0ceb5b9ad 100644
--- a/docs/CODE_STYLE.md
+++ b/docs/CODE_STYLE.md
@@ -9,8 +9,8 @@ style is more important than individual contributors having their own personal
 tastes satisfied.
 
 Our C code has a few style rules. Most of them are verified and upheld by the
-"lib/checksrc.pl" script. Invoked with "make checksrc" or even by default by
-the build system when built after "./configure --enable-debug" has been used.
+`lib/checksrc.pl` script. Invoked with `make checksrc` or even by default by
+the build system when built after `./configure --enable-debug` has been used.
 
 It is normally not a problem for anyone to follow the guidelines, as you just
 need to copy the style already used in the source code and there are no
@@ -227,7 +227,7 @@ Align with the "current open" parenthesis:
 Use **#ifdef HAVE_FEATURE** to do conditional code. We avoid checking for
 particular operating systems or hardware in the #ifdef lines. The HAVE_FEATURE
 shall be generated by the configure script for unix-like systems and they are
-hard-coded in the config-[system].h files for the others.
+hard-coded in the `config-[system].h` files for the others.
 
 We also encourage use of macros/functions that possibly are empty or defined
 to constants when libcurl is built without that feature, to make the code
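
The rule above (feature macros that compile away to nothing when a feature is disabled) can be illustrated with a small standalone sketch; `HAVE_FOO` and `foo_report()` are hypothetical names for illustration only, not actual curl symbols:

    #include <stdio.h>

    /* hypothetical feature macro, standing in for the HAVE_FEATURE
       defines that configure or config-[system].h would provide */
    #define HAVE_FOO 1

    #ifdef HAVE_FOO
    static void foo_report(void) { printf("foo support compiled in\n"); }
    #else
    /* becomes a no-op when the feature is disabled, so calling code
       stays free of #ifdef clutter */
    #define foo_report() do {} while(0)
    #endif

    int main(void)
    {
      foo_report();
      return 0;
    }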
diff --git a/docs/INSTALL.md b/docs/INSTALL.md
index e1a0a3cf9..d287d55e4 100644
--- a/docs/INSTALL.md
+++ b/docs/INSTALL.md
@@ -64,7 +64,7 @@ have OpenSSL installed in your system, you can run configure like this:
    ./configure --without-ssl
 
 If you have OpenSSL installed, but with the libraries in one place and the
-header files somewhere else, you have to set the LDFLAGS and CPPFLAGS
+header files somewhere else, you have to set the `LDFLAGS` and `CPPFLAGS`
 environment variables prior to running configure.  Something like this should
 work:
 
@@ -121,9 +121,9 @@ libressl.
  KB140584 is a must for any Windows developer. Especially important is full
  understanding if you are not going to follow the advice given above.
 
- - [How To Use the C Run-Time](https://support.microsoft.com/kb/94248/en-us)
- - [Run-Time Library Compiler Options](https://docs.microsoft.com/en-us/cpp/build/reference/md-mt-ld-use-run-time-library)
- - [Potential Errors Passing CRT Objects Across DLL Boundaries](https://msdn.microsoft.com/en-us/library/ms235460)
+ - [How To Use the C Run-Time](https://support.microsoft.com/help/94248/how-to-use-the-c-run-time)
+ - [Run-Time Library Compiler Options](https://docs.microsoft.com/cpp/build/reference/md-mt-ld-use-run-time-library)
+ - [Potential Errors Passing CRT Objects Across DLL Boundaries](https://docs.microsoft.com/cpp/c-runtime-library/potential-errors-passing-crt-objects-across-dll-boundaries)
 
 If your app is misbehaving in some strange way, or it is suffering from
 memory corruption, before asking for further help, please try first to
@@ -148,7 +148,7 @@ make targets available to build libcurl with more features, use:
    and SSPI support.
 
 If you have any problems linking libraries or finding header files, be sure
-to verify that the provided "Makefile.m32" files use the proper paths, and
+to verify that the provided `Makefile.m32` files use the proper paths, and
 adjust as necessary. It is also possible to override these paths with
 environment variables, for example:
 
@@ -172,8 +172,8 @@ If you want to enable LDAPS support then set LDAPS=1.
 ## Cygwin
 
 Almost identical to the unix installation. Run the configure script in the
-curl source tree root with `sh configure`. Make sure you have the sh
-executable in /bin/ or you'll see the configure fail toward the end.
+curl source tree root with `sh configure`. Make sure you have the `sh`
+executable in `/bin/` or you'll see the configure fail toward the end.
 
 Run `make`
 
@@ -200,9 +200,9 @@ protocols:
 
 If you want to set any of these defines you have the following options:
 
- - Modify lib/config-win32.h
- - Modify lib/curl_setup.h
- - Modify winbuild/Makefile.vc
+ - Modify `lib/config-win32.h`
+ - Modify `lib/curl_setup.h`
+ - Modify `winbuild/Makefile.vc`
  - Modify the "Preprocessor Definitions" in the libcurl project
 
 Note: The pre-processor settings can be found using the Visual Studio IDE
@@ -213,12 +213,12 @@ versions.
 ## Using BSD-style lwIP instead of Winsock TCP/IP stack in Win32 builds
 
 In order to compile libcurl and curl using BSD-style lwIP TCP/IP stack it is
-necessary to make definition of preprocessor symbol USE_LWIPSOCK visible to
+necessary to make definition of preprocessor symbol `USE_LWIPSOCK` visible to
 libcurl and curl compilation processes. To set this definition you have the
 following alternatives:
 
- - Modify lib/config-win32.h and src/config-win32.h
- - Modify winbuild/Makefile.vc
+ - Modify `lib/config-win32.h` and `src/config-win32.h`
+ - Modify `winbuild/Makefile.vc`
  - Modify the "Preprocessor Definitions" in the libcurl project
 
 Note: The pre-processor settings can be found using the Visual Studio IDE
@@ -248,13 +248,13 @@ look for dynamic import symbols.
 
 ## Legacy Windows and SSL
 
-WinSSL (specifically Schannel from Windows SSPI), is the native SSL library in
-Windows. However, WinSSL in Windows <= XP is unable to connect to servers that
+Schannel (from Windows SSPI), is the native SSL library in Windows. However,
+Schannel in Windows <= XP is unable to connect to servers that
 no longer support the legacy handshakes and algorithms used by those
 versions. If you will be using curl in one of those earlier versions of
 Windows you should choose another SSL backend such as OpenSSL.
 
-# Apple iOS and Mac OS X
+# Apple iOS and macOS
 
 On modern Apple operating systems, curl can be built to use Apple's SSL/TLS
 implementation, Secure Transport, instead of OpenSSL. To build with Secure
@@ -269,12 +269,12 @@ the server. This, of course, includes the root certificates that ship with the
 OS. The `--cert` and `--engine` options, and their libcurl equivalents, are
 currently unimplemented in curl with Secure Transport.
 
-For OS X users: In OS X 10.8 ("Mountain Lion"), Apple made a major overhaul to
-the Secure Transport API that, among other things, added support for the newer
-TLS 1.1 and 1.2 protocols. To get curl to support TLS 1.1 and 1.2, you must
-build curl on Mountain Lion or later, or by using the equivalent SDK. If you
-set the `MACOSX_DEPLOYMENT_TARGET` environmental variable to an earlier
-version of OS X prior to building curl, then curl will use the new Secure
+For macOS users: In OS X 10.8 ("Mountain Lion"), Apple made a major overhaul
+to the Secure Transport API that, among other things, added support for the
+newer TLS 1.1 and 1.2 protocols. To get curl to support TLS 1.1 and 1.2, you
+must build curl on Mountain Lion or later, or by using the equivalent SDK. If
+you set the `MACOSX_DEPLOYMENT_TARGET` environmental variable to an earlier
+version of macOS prior to building curl, then curl will use the new Secure
 Transport API on Mountain Lion and later, and fall back on the older API when
 the same curl binary is executed on older cats. For example, running these
 commands in curl's directory in the shell will build the code such that it
@@ -288,7 +288,7 @@ will run on cats as old as OS X 10.6 ("Snow Leopard") (using bash):
 
 Download and unpack the curl package.
 
-'cd' to the new directory. (e.g. `cd curl-7.12.3`)
+`cd` to the new directory. (e.g. `cd curl-7.12.3`)
 
 Set environment variables to point to the cross-compile toolchain and call
 configure with any options you need.  Be sure and specify the `--host` and
@@ -327,7 +327,7 @@ In some cases, you may be able to simplify the above commands to as little as:
 
 There are a number of configure options that can be used to reduce the size of
 libcurl for embedded applications where binary size is an important factor.
-First, be sure to set the CFLAGS variable when configuring with any relevant
+First, be sure to set the `CFLAGS` variable when configuring with any relevant
 compiler optimization flags to reduce the size of the binary.  For gcc, this
 would mean at minimum the -Os option, and potentially the `-march=X`,
 `-mdynamic-no-pic` and `-flto` options as well, e.g.
@@ -360,8 +360,8 @@ use, here are some other flags that can reduce the size of the library:
 
 The GNU compiler and linker have a number of options that can reduce the
 size of the libcurl dynamic libraries on some platforms even further.
-Specify them by providing appropriate CFLAGS and LDFLAGS variables on the
-configure command-line, e.g.
+Specify them by providing appropriate `CFLAGS` and `LDFLAGS` variables on
+the configure command-line, e.g.
 
     CFLAGS="-Os -ffunction-sections -fdata-sections
             -fno-unwind-tables -fno-asynchronous-unwind-tables -flto"
@@ -383,7 +383,7 @@ in a lower total size than dynamically linking.
 Note that the curl test harness can detect the use of some, but not all, of
 the `--disable` statements suggested above. Use will cause tests relying on
 those features to fail.  The test harness can be manually forced to skip the
-relevant tests by specifying certain key words on the runtests.pl command
+relevant tests by specifying certain key words on the `runtests.pl` command
 line.  Following is a list of appropriate key words:
 
  - `--disable-cookies`          !cookies
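
Related to the feature toggles and SSL backends discussed throughout this file: an application can check at run time what a particular libcurl build ended up with via `curl_version_info()`. A small illustrative program (the fields printed are just a subset of what the struct exposes):

    #include <stdio.h>
    #include <curl/curl.h>

    int main(void)
    {
      /* query the build configuration of the libcurl we are linked against */
      curl_version_info_data *info = curl_version_info(CURLVERSION_NOW);

      printf("libcurl %s\n", info->version);
      printf("SSL backend: %s\n", info->ssl_version ? info->ssl_version : "none");
      printf("libz support: %s\n",
             (info->features & CURL_VERSION_LIBZ) ? "yes" : "no");
      return 0;
    }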
diff --git a/docs/INTERNALS.md b/docs/INTERNALS.md
index 2120216db..1563ec516 100644
--- a/docs/INTERNALS.md
+++ b/docs/INTERNALS.md
@@ -34,7 +34,7 @@ curl internals
  - [`curl_off_t`](#curl_off_t)
  - [curlx](#curlx)
  - [Content Encoding](#contentencoding)
- - [hostip.c explained](#hostip)
+ - [`hostip.c` explained](#hostip)
  - [Track Down Memory Leaks](#memoryleak)
  - [`multi_socket`](#multi_socket)
  - [Structs in libcurl](#structs)
@@ -73,7 +73,7 @@ git
 Portability
 ===========
 
- We write curl and libcurl to compile with C89 compilers.  On 32bit and up
+ We write curl and libcurl to compile with C89 compilers.  On 32-bit and up
  machines. Most of libcurl assumes more or less POSIX compliance but that's
  not a requirement.
 
@@ -125,7 +125,7 @@ Build tools
  - GNU M4       1.4
  - perl         5.004
  - roffit       0.5
- - groff        ? (any version that supports "groff -Tps -man [in] [out]")
+ - groff        ? (any version that supports `groff -Tps -man [in] [out]`)
  - ps2pdf (gs)  ?
 
 <a name="winvsunix"></a>
@@ -139,7 +139,7 @@ Windows vs Unix
 
    In curl, this is solved with defines and macros, so that the source looks
    the same in all places except for the header file that defines them. The
-   macros in use are sclose(), sread() and swrite().
+   macros in use are `sclose()`, `sread()` and `swrite()`.
 
  2. Windows requires a couple of init calls for the socket stuff.
 
@@ -178,14 +178,14 @@ Library
  There are plenty of entry points to the library, namely each publicly defined
  function that libcurl offers to applications. All of those functions are
  rather small and easy-to-follow. All the ones prefixed with `curl_easy` are
- put in the lib/easy.c file.
+ put in the `lib/easy.c` file.
 
  `curl_global_init()` and `curl_global_cleanup()` should be called by the
  application to initialize and clean up global stuff in the library. As of
  today, it can handle the global SSL initing if SSL is enabled and it can init
  the socket layer on windows machines. libcurl itself has no "global" scope.
 
- All printf()-style functions use the supplied clones in lib/mprintf.c. This
+ All printf()-style functions use the supplied clones in `lib/mprintf.c`. This
  makes sure we stay absolutely platform independent.
 
  [ `curl_easy_init()`][2] allocates an internal struct and makes some
@@ -204,8 +204,8 @@ Library
  `curl_multi_wait()`, and `curl_multi_perform()` until the transfer is done
  and then returns.
 
- Some of the most important key functions in url.c are called from multi.c
- when certain key steps are to be made in the transfer operation.
+ Some of the most important key functions in `url.c` are called from
+ `multi.c` when certain key steps are to be made in the transfer operation.
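
As a rough sketch of the flow described above (an easy handle driven to completion through the multi interface), with error checking omitted; this is an application-level approximation of the idea, not the actual code in `easy.c` or `multi.c`:

    #include <curl/curl.h>

    /* drive one easy handle to completion via the multi interface,
       roughly the shape of what curl_easy_perform() is described
       as doing internally */
    static void perform_via_multi(CURL *easy)
    {
      CURLM *multi = curl_multi_init();
      int still_running = 0;

      curl_multi_add_handle(multi, easy);
      do {
        int numfds;
        curl_multi_perform(multi, &still_running);
        /* wait up to 1000 ms for activity on the transfer's sockets */
        curl_multi_wait(multi, NULL, 0, 1000, &numfds);
      } while(still_running);

      curl_multi_remove_handle(multi, easy);
      curl_multi_cleanup(multi);
    }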
 
 <a name="Curl_connect"></a>
 Curl_connect()
@@ -213,15 +213,15 @@ Curl_connect()
 
    Analyzes the URL, it separates the different components and connects to the
    remote host. This may involve using a proxy and/or using SSL. The
-   `Curl_resolv()` function in lib/hostip.c is used for looking up host names
-   (it does then use the proper underlying method, which may vary between
-   platforms and builds).
+   `Curl_resolv()` function in `lib/hostip.c` is used for looking up host
+   names (it does then use the proper underlying method, which may vary
+   between platforms and builds).
 
    When `Curl_connect` is done, we are connected to the remote site. Then it
    is time to tell the server to get a document/file. `Curl_do()` arranges
    this.
 
-   This function makes sure there's an allocated and initiated 'connectdata'
+   This function makes sure there's an allocated and initiated `connectdata`
    struct that is used for this particular connection only (although there may
    be several requests performed on the same connect). A bunch of things are
    inited/inherited from the `Curl_easy` struct.
@@ -230,15 +230,15 @@ Curl_connect()
 multi_do()
 ---------
 
-   `multi_do()` makes sure the proper protocol-specific function is called. The
-   functions are named after the protocols they handle.
+   `multi_do()` makes sure the proper protocol-specific function is called.
+   The functions are named after the protocols they handle.
 
    The protocol-specific functions of course deal with protocol-specific
    negotiations and setup. They have access to the `Curl_sendf()` (from
-   lib/sendf.c) function to send printf-style formatted data to the remote
+   `lib/sendf.c`) function to send printf-style formatted data to the remote
    host and when they're ready to make the actual file transfer they call the
-   `Curl_setup_transfer()` function (in lib/transfer.c) to setup the transfer
-   and returns.
+   `Curl_setup_transfer()` function (in `lib/transfer.c`) to setup the
+   transfer and returns.
 
    If this DO function fails and the connection is being re-used, libcurl will
    then close this connection, setup a new connection and re-issue the DO
@@ -252,9 +252,9 @@ Curl_readwrite()
 
    Called during the transfer of the actual protocol payload.
 
-   During transfer, the progress functions in lib/progress.c are called at
+   During transfer, the progress functions in `lib/progress.c` are called at
    frequent intervals (or at the user's choice, a specified callback might get
-   called). The speedcheck functions in lib/speedcheck.c are also used to
+   called). The speedcheck functions in `lib/speedcheck.c` are also used to
    verify that the transfer is as fast as required.
 
 <a name="multi_done"></a>
@@ -286,11 +286,12 @@ HTTP(S)
 =======
 
  HTTP offers a lot and is the protocol in curl that uses the most lines of
- code. There is a special file (lib/formdata.c) that offers all the multipart
- post functions.
+ code. There is a special file `lib/formdata.c` that offers all the
+ multipart post functions.
 
- base64-functions for user+password stuff (and more) is in (lib/base64.c) and
- all functions for parsing and sending cookies are found in (lib/cookie.c).
+ base64-functions for user+password stuff (and more) is in `lib/base64.c`
+ and all functions for parsing and sending cookies are found in
+ `lib/cookie.c`.
 
  HTTPS uses in almost every case the same procedure as HTTP, with only two
  exceptions: the connect procedure is different and the function used to read
@@ -312,18 +313,18 @@ FTP
 ===
 
  The `Curl_if2ip()` function can be used for getting the IP number of a
- specified network interface, and it resides in lib/if2ip.c.
+ specified network interface, and it resides in `lib/if2ip.c`.
 
  `Curl_ftpsendf()` is used for sending FTP commands to the remote server. It
  was made a separate function to prevent us programmers from forgetting that
- they must be CRLF terminated. They must also be sent in one single write() to
- make firewalls and similar happy.
+ they must be CRLF terminated. They must also be sent in one single `write()`
+ to make firewalls and similar happy.
 
 <a name="kerberos"></a>
 Kerberos
 ========
 
- Kerberos support is mainly in lib/krb5.c and lib/security.c but also
+ Kerberos support is mainly in `lib/krb5.c` and `lib/security.c` but also
  `curl_sasl_sspi.c` and `curl_sasl_gssapi.c` for the email protocols and
  `socks_gssapi.c` and `socks_sspi.c` for SOCKS5 proxy specifics.
 
@@ -331,55 +332,57 @@ Kerberos
 TELNET
 ======
 
- Telnet is implemented in lib/telnet.c.
+ Telnet is implemented in `lib/telnet.c`.
 
 <a name="file"></a>
 FILE
 ====
 
- The file:// protocol is dealt with in lib/file.c.
+ The `file://` protocol is dealt with in `lib/file.c`.
 
 <a name="smb"></a>
 SMB
 ===
 
- The smb:// protocol is dealt with in lib/smb.c.
+ The `smb://` protocol is dealt with in `lib/smb.c`.
 
 <a name="ldap"></a>
 LDAP
 ====
 
- Everything LDAP is in lib/ldap.c and lib/openldap.c
+ Everything LDAP is in `lib/ldap.c` and `lib/openldap.c`.
 
 <a name="email"></a>
 E-mail
 ======
 
- The e-mail related source code is in lib/imap.c, lib/pop3.c and lib/smtp.c.
+ The e-mail related source code is in `lib/imap.c`, `lib/pop3.c` and
+ `lib/smtp.c`.
 
 <a name="general"></a>
 General
 =======
 
  URL encoding and decoding, called escaping and unescaping in the source code,
- is found in lib/escape.c.
+ is found in `lib/escape.c`.
 
- While transferring data in Transfer() a few functions might get used.
- `curl_getdate()` in lib/parsedate.c is for HTTP date comparisons (and more).
+ While transferring data in `Transfer()` a few functions might get used.
+ `curl_getdate()` in `lib/parsedate.c` is for HTTP date comparisons (and
+ more).
 
- lib/getenv.c offers `curl_getenv()` which is for reading environment
+ `lib/getenv.c` offers `curl_getenv()` which is for reading environment
  variables in a neat platform independent way. That's used in the client, but
- also in lib/url.c when checking the proxy environment variables. Note that
- contrary to the normal unix getenv(), this returns an allocated buffer that
- must be free()ed after use.
+ also in `lib/url.c` when checking the proxy environment variables. Note that
+ contrary to the normal unix `getenv()`, this returns an allocated buffer that
+ must be `free()`ed after use.
 
- lib/netrc.c holds the .netrc parser
+ `lib/netrc.c` holds the `.netrc` parser.
 
- lib/timeval.c features replacement functions for systems that don't have
- gettimeofday() and a few support functions for timeval conversions.
+ `lib/timeval.c` features replacement functions for systems that don't have
+ `gettimeofday()` and a few support functions for timeval conversions.
 
  A function named `curl_version()` that returns the full curl version string
- is found in lib/version.c.
+ is found in `lib/version.c`.
 
 <a name="persistent"></a>
 Persistent Connections
@@ -393,7 +396,7 @@ Persistent Connections
    as well as all the options etc that the library-user may choose.
 
  - The `Curl_easy` struct holds the "connection cache" (an array of
-   pointers to 'connectdata' structs).
+   pointers to `connectdata` structs).
 
  - This enables the 'curl handle' to be reused on subsequent transfers.
 
@@ -441,10 +444,10 @@ SSL libraries
  in future libcurl versions.
 
  To deal with this internally in the best way possible, we have a generic SSL
- function API as provided by the vtls/vtls.[ch] system, and they are the only
+ function API as provided by the `vtls/vtls.[ch]` system, and they are the only
  SSL functions we must use from within libcurl. vtls is then crafted to use
  the appropriate lower-level function calls to whatever SSL library that is in
- use. For example vtls/openssl.[ch] for the OpenSSL library.
+ use. For example `vtls/openssl.[ch]` for the OpenSSL library.
 
 <a name="symbols"></a>
 Library Symbols
@@ -463,7 +466,7 @@ Return Codes and Informationals
 
  I've made things simple. Almost every function in libcurl returns a CURLcode,
  that must be `CURLE_OK` if everything is OK or otherwise a suitable error
- code as the curl/curl.h include file defines. The very spot that detects an
+ code as the `curl/curl.h` include file defines. The very spot that detects an
  error must use the `Curl_failf()` function to set the human-readable error
  description.
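
From the application side, the CURLcode convention described here is typically paired with `curl_easy_strerror()` to turn a code into readable text; a minimal sketch (placeholder URL):

    #include <stdio.h>
    #include <curl/curl.h>

    int main(void)
    {
      CURL *curl = curl_easy_init();
      CURLcode res = CURLE_FAILED_INIT;

      if(curl) {
        curl_easy_setopt(curl, CURLOPT_URL, "https://example.com/");
        res = curl_easy_perform(curl);
        if(res != CURLE_OK)
          /* map the CURLcode to a human-readable description */
          fprintf(stderr, "transfer failed: %s\n", curl_easy_strerror(res));
        curl_easy_cleanup(curl);
      }
      return (int)res;
    }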
 
@@ -485,20 +488,20 @@ API/ABI
 Client
 ======
 
- main() resides in `src/tool_main.c`.
+ `main()` resides in `src/tool_main.c`.
 
- `src/tool_hugehelp.c` is automatically generated by the mkhelp.pl perl script
- to display the complete "manual" and the `src/tool_urlglob.c` file holds the
- functions used for the URL-"globbing" support. Globbing in the sense that the
- {} and [] expansion stuff is there.
+ `src/tool_hugehelp.c` is automatically generated by the `mkhelp.pl` perl
+ script to display the complete "manual" and the `src/tool_urlglob.c` file
+ holds the functions used for the URL-"globbing" support. Globbing in the
+ sense that the `{}` and `[]` expansion stuff is there.
 
- The client mostly sets up its 'config' struct properly, then
+ The client mostly sets up its `config` struct properly, then
  it calls the `curl_easy_*()` functions of the library and when it gets back
  control after the `curl_easy_perform()` it cleans up the library, checks
  status and exits.
 
- When the operation is done, the ourWriteOut() function in src/writeout.c may
- be called to report about the operation. That function is using the
+ When the operation is done, the `ourWriteOut()` function in `src/writeout.c`
+ may be called to report about the operation. That function is using the
  `curl_easy_getinfo()` function to extract useful information from the curl
  session.
 
@@ -509,30 +512,32 @@ Client
 Memory Debugging
 ================
 
- The file lib/memdebug.c contains debug-versions of a few functions. Functions
- such as malloc, free, fopen, fclose, etc that somehow deal with resources
- that might give us problems if we "leak" them. The functions in the memdebug
- system do nothing fancy, they do their normal function and then log
- information about what they just did. The logged data can then be analyzed
- after a complete session,
+ The file `lib/memdebug.c` contains debug-versions of a few functions.
+ Functions such as `malloc()`, `free()`, `fopen()`, `fclose()`, etc that
+ somehow deal with resources that might give us problems if we "leak" them.
+ The functions in the memdebug system do nothing fancy, they do their normal
+ function and then log information about what they just did. The logged data
+ can then be analyzed after a complete session,
 
- memanalyze.pl is the perl script present in tests/ that analyzes a log file
- generated by the memory tracking system. It detects if resources are
+ `memanalyze.pl` is the perl script present in `tests/` that analyzes a log
+ file generated by the memory tracking system. It detects if resources are
  allocated but never freed and other kinds of errors related to resource
  management.
 
- Internally, definition of preprocessor symbol DEBUGBUILD restricts code which
- is only compiled for debug enabled builds. And symbol CURLDEBUG is used to
- differentiate code which is _only_ used for memory tracking/debugging.
+ Internally, definition of preprocessor symbol `DEBUGBUILD` restricts code
+ which is only compiled for debug enabled builds. And symbol `CURLDEBUG` is
+ used to differentiate code which is _only_ used for memory
+ tracking/debugging.
 
- Use -DCURLDEBUG when compiling to enable memory debugging, this is also
- switched on by running configure with --enable-curldebug. Use -DDEBUGBUILD
- when compiling to enable a debug build or run configure with --enable-debug.
+ Use `-DCURLDEBUG` when compiling to enable memory debugging, this is also
+ switched on by running configure with `--enable-curldebug`. Use
+ `-DDEBUGBUILD` when compiling to enable a debug build or run configure with
+ `--enable-debug`.
 
- curl --version will list 'Debug' feature for debug enabled builds, and
+ `curl --version` will list 'Debug' feature for debug enabled builds, and
  will list 'TrackMemory' feature for curl debug memory tracking capable
  builds. These features are independent and can be controlled when running
- the configure script. When --enable-debug is given both features will be
+ the configure script. When `--enable-debug` is given both features will be
  enabled, unless some restriction prevents memory tracking from being used.
 
 <a name="test"></a>
@@ -543,12 +548,12 @@ Test Suite
  curl archive tree, and it contains a bunch of scripts and a lot of test case
  data.
 
- The main test script is runtests.pl that will invoke test servers like
- httpserver.pl and ftpserver.pl before all the test cases are performed. The
- test suite currently only runs on Unix-like platforms.
+ The main test script is `runtests.pl` that will invoke test servers like
+ `httpserver.pl` and `ftpserver.pl` before all the test cases are performed.
+ The test suite currently only runs on Unix-like platforms.
 
- You'll find a description of the test suite in the tests/README file, and the
- test case data files in the tests/FILEFORMAT file.
+ You'll find a description of the test suite in the `tests/README` file, and
+ the test case data files in the `tests/FILEFORMAT` file.
 
  The test suite automatically detects if curl was built with the memory
  debugging enabled, and if it was, it will detect memory leaks, too.
@@ -576,7 +581,7 @@ Asynchronous name resolves
  prevent linking errors later on). Then I simply build the areslib project
  (the other projects adig/ahost seem to fail under MSVC).
 
- Next was libcurl. I opened lib/config-win32.h and I added a:
+ Next was libcurl. I opened `lib/config-win32.h` and I added a:
  `#define USE_ARES 1`
 
  Next thing I did was I added the path for the ares includes to the include
@@ -585,8 +590,8 @@ Asynchronous name resolves
  Lastly, I also changed libcurl to be single-threaded rather than
  multi-threaded, again this was to prevent some duplicate symbol errors. I'm
  not sure why I needed to change everything to single-threaded, but when I
- didn't I got redefinition errors for several CRT functions (malloc, stricmp,
- etc.)
+ didn't I got redefinition errors for several CRT functions (`malloc()`,
+ `stricmp()`, etc.)
 
 <a name="curl_off_t"></a>
 `curl_off_t`
@@ -594,7 +599,7 @@ Asynchronous name resolves
 
  `curl_off_t` is a data type provided by the external libcurl include
  headers. It is the type meant to be used for the [`curl_easy_setopt()`][1]
- options that end with LARGE. The type is 64bit large on most modern
+ options that end with LARGE. The type is 64-bit large on most modern
  platforms.
 
 <a name="curlx"></a>
@@ -607,15 +612,15 @@ curlx
  additional functions.
 
  We provide them through a single header file for easy access for apps:
- "curlx.h"
+ `curlx.h`
 
 `curlx_strtoofft()`
 -------------------
    A macro that converts a string containing a number to a `curl_off_t` number.
    This might use the `curlx_strtoll()` function which is provided as source
    code in strtoofft.c. Note that the function is only provided if no
-   strtoll() (or equivalent) function exist on your platform. If `curl_off_t`
-   is only a 32 bit number on your platform, this macro uses strtol().
+   `strtoll()` (or equivalent) function exist on your platform. If `curl_off_t`
+   is only a 32-bit number on your platform, this macro uses `strtol()`.
 
 Future
 ------
@@ -649,27 +654,28 @@ Content Encoding
  [HTTP/1.1][4] specifies that a client may request that a server encode its
  response. This is usually used to compress a response using one (or more)
  encodings from a set of commonly available compression techniques. These
- schemes include 'deflate' (the zlib algorithm), 'gzip' 'br' (brotli) and
- 'compress'. A client requests that the server perform an encoding by including
- an Accept-Encoding header in the request document. The value of the header
- should be one of the recognized tokens 'deflate', ... (there's a way to
+ schemes include `deflate` (the zlib algorithm), `gzip`, `br` (brotli) and
+ `compress`. A client requests that the server perform an encoding by including
+ an `Accept-Encoding` header in the request document. The value of the header
+ should be one of the recognized tokens `deflate`, ... (there's a way to
  register new schemes/tokens, see sec 3.5 of the spec). A server MAY honor
  the client's encoding request. When a response is encoded, the server
- includes a Content-Encoding header in the response. The value of the
- Content-Encoding header indicates which encodings were used to encode the
+ includes a `Content-Encoding` header in the response. The value of the
+ `Content-Encoding` header indicates which encodings were used to encode the
  data, in the order in which they were applied.
 
  It's also possible for a client to attach priorities to different schemes so
  that the server knows which it prefers. See sec 14.3 of RFC 2616 for more
- information on the Accept-Encoding header. See sec [3.1.2.2 of RFC 7231][15]
- for more information on the Content-Encoding header.
+ information on the `Accept-Encoding` header. See sec
+ [3.1.2.2 of RFC 7231][15] for more information on the `Content-Encoding`
+ header.
 
 ## Supported content encodings
 
- The 'deflate', 'gzip' and 'br' content encodings are supported by libcurl.
+ The `deflate`, `gzip` and `br` content encodings are supported by libcurl.
  Both regular and chunked transfers work fine.  The zlib library is required
- for the 'deflate' and 'gzip' encodings, while the brotli decoding library is
- for the 'br' encoding.
+ for the `deflate` and `gzip` encodings, while the brotli decoding library is
+ for the `br` encoding.
 
 ## The libcurl interface
 
@@ -677,45 +683,45 @@ Content Encoding
 
   [`curl_easy_setopt`][1](curl, [`CURLOPT_ACCEPT_ENCODING`][5], string)
 
- where string is the intended value of the Accept-Encoding header.
+ where string is the intended value of the `Accept-Encoding` header.
 
  Currently, libcurl does support multiple encodings but only
- understands how to process responses that use the "deflate", "gzip" and/or
- "br" content encodings, so the only values for [`CURLOPT_ACCEPT_ENCODING`][5]
- that will work (besides "identity," which does nothing) are "deflate",
- "gzip" and "br". If a response is encoded using the "compress" or methods,
+ understands how to process responses that use the `deflate`, `gzip` and/or
+ `br` content encodings, so the only values for [`CURLOPT_ACCEPT_ENCODING`][5]
+ that will work (besides `identity`, which does nothing) are `deflate`,
+ `gzip` and `br`. If a response is encoded using the `compress` or methods,
  libcurl will return an error indicating that the response could
- not be decoded.  If `<string>` is NULL no Accept-Encoding header is generated.
- If `<string>` is a zero-length string, then an Accept-Encoding header
- containing all supported encodings will be generated.
+ not be decoded.  If `<string>` is NULL no `Accept-Encoding` header is
+ generated. If `<string>` is a zero-length string, then an `Accept-Encoding`
+ header containing all supported encodings will be generated.
 
  The [`CURLOPT_ACCEPT_ENCODING`][5] must be set to any non-NULL value for
  content to be automatically decoded.  If it is not set and the server still
  sends encoded content (despite not having been asked), the data is returned
- in its raw form and the Content-Encoding type is not checked.
+ in its raw form and the `Content-Encoding` type is not checked.
 
 ## The curl interface
 
- Use the [--compressed][6] option with curl to cause it to ask servers to
+ Use the [`--compressed`][6] option with curl to cause it to ask servers to
  compress responses using any format supported by curl.
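
Putting the `CURLOPT_ACCEPT_ENCODING` behavior described above into a minimal example (placeholder URL); per the text, the empty string asks for every encoding this libcurl build supports and lets libcurl decode the response transparently:

    #include <curl/curl.h>

    int main(void)
    {
      CURL *curl = curl_easy_init();
      if(curl) {
        curl_easy_setopt(curl, CURLOPT_URL, "https://example.com/");
        /* "" = offer all supported encodings (deflate, gzip, br, ...)
           and have the Content-Encoding decoded automatically */
        curl_easy_setopt(curl, CURLOPT_ACCEPT_ENCODING, "");
        curl_easy_perform(curl);
        curl_easy_cleanup(curl);
      }
      return 0;
    }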
 
 <a name="hostip"></a>
-hostip.c explained
-==================
+`hostip.c` explained
+====================
 
- The main compile-time defines to keep in mind when reading the host*.c source
- file are these:
+ The main compile-time defines to keep in mind when reading the `host*.c`
+ source file are these:
 
 ## `CURLRES_IPV6`
 
- this host has getaddrinfo() and family, and thus we use that. The host may
+ this host has `getaddrinfo()` and family, and thus we use that. The host may
  not be able to resolve IPv6, but we don't really have to take that into
  account. Hosts that aren't IPv6-enabled have `CURLRES_IPV4` defined.
 
 ## `CURLRES_ARES`
 
  is defined if libcurl is built to use c-ares for asynchronous name
- resolves. This can be Windows or *nix.
+ resolves. This can be Windows or \*nix.
 
 ## `CURLRES_THREADED`
 
@@ -728,20 +734,20 @@ hostip.c explained
  libcurl is not built to use an asynchronous resolver, `CURLRES_SYNCH` is
  defined.
 
-## host*.c sources
+## `host*.c` sources
 
- The host*.c sources files are split up like this:
+ The `host*.c` sources files are split up like this:
 
- - hostip.c      - method-independent resolver functions and utility functions
- - hostasyn.c    - functions for asynchronous name resolves
- - hostsyn.c     - functions for synchronous name resolves
- - asyn-ares.c   - functions for asynchronous name resolves using c-ares
- - asyn-thread.c - functions for asynchronous name resolves using threads
- - hostip4.c     - IPv4 specific functions
- - hostip6.c     - IPv6 specific functions
+ - `hostip.c`      - method-independent resolver functions and utility functions
+ - `hostasyn.c`    - functions for asynchronous name resolves
+ - `hostsyn.c`     - functions for synchronous name resolves
+ - `asyn-ares.c`   - functions for asynchronous name resolves using c-ares
+ - `asyn-thread.c` - functions for asynchronous name resolves using threads
+ - `hostip4.c`     - IPv4 specific functions
+ - `hostip6.c`     - IPv6 specific functions
 
- The hostip.h is the single united header file for all this. It defines the
- `CURLRES_*` defines based on the config*.h and `curl_setup.h` defines.
+ The `hostip.h` is the single united header file for all this. It defines the
+ `CURLRES_*` defines based on the `config*.h` and `curl_setup.h` defines.
 
 <a name="memoryleak"></a>
 Track Down Memory Leaks
@@ -753,14 +759,13 @@ Track Down Memory Leaks
   than one thread. If you want/need to use it in a multi-threaded app. Please
   adjust accordingly.
 
-
 ## Build
 
-  Rebuild libcurl with -DCURLDEBUG (usually, rerunning configure with
-  --enable-debug fixes this). 'make clean' first, then 'make' so that all
+  Rebuild libcurl with `-DCURLDEBUG` (usually, rerunning configure with
+  `--enable-debug` fixes this). `make clean` first, then `make` so that all
   files are actually rebuilt properly. It will also make sense to build
-  libcurl with the debug option (usually -g to the compiler) so that debugging
-  it will be easier if you actually do find a leak in the library.
+  libcurl with the debug option (usually `-g` to the compiler) so that
+  debugging it will be easier if you actually do find a leak in the library.
 
   This will create a library that has memory debugging enabled.
 
@@ -784,7 +789,7 @@ Track Down Memory Leaks
 
 ## Analyze the Flow
 
-  Use the tests/memanalyze.pl perl script to analyze the dump file:
+  Use the `tests/memanalyze.pl` perl script to analyze the dump file:
 
     tests/memanalyze.pl dump
 
@@ -800,45 +805,46 @@ Track Down Memory Leaks
 
  Implementation of the `curl_multi_socket` API
 
-  The main ideas of this API are simply:
-
-   1 - The application can use whatever event system it likes as it gets info
-       from libcurl about what file descriptors libcurl waits for what action
-       on. (The previous API returns `fd_sets` which is very select()-centric).
-
-   2 - When the application discovers action on a single socket, it calls
-       libcurl and informs that there was action on this particular socket and
-       libcurl can then act on that socket/transfer only and not care about
-       any other transfers. (The previous API always had to scan through all
-       the existing transfers.)
-
-  The idea is that [`curl_multi_socket_action()`][7] calls a given callback
-  with information about what socket to wait for what action on, and the
-  callback only gets called if the status of that socket has changed.
-
-  We also added a timer callback that makes libcurl call the application when
-  the timeout value changes, and you set that with [`curl_multi_setopt()`][9]
-  and the [`CURLMOPT_TIMERFUNCTION`][10] option. To get this to work,
-  Internally, there's an added struct to each easy handle in which we store
-  an "expire time" (if any). The structs are then "splay sorted" so that we
-  can add and remove times from the linked list and yet somewhat swiftly
-  figure out both how long there is until the next nearest timer expires
-  and which timer (handle) we should take care of now. Of course, the upside
-  of all this is that we get a [`curl_multi_timeout()`][8] that should also
-  work with old-style applications that use [`curl_multi_perform()`][11].
-
-  We created an internal "socket to easy handles" hash table that given
-  a socket (file descriptor) returns the easy handle that waits for action on
-  that socket.  This hash is made using the already existing hash code
-  (previously only used for the DNS cache).
-
-  To make libcurl able to report plain sockets in the socket callback, we had
-  to re-organize the internals of the [`curl_multi_fdset()`][12] etc so that
-  the conversion from sockets to `fd_sets` for that function is only done in
-  the last step before the data is returned. I also had to extend c-ares to
-  get a function that can return plain sockets, as that library too returned
-  only `fd_sets` and that is no longer good enough. The changes done to c-ares
-  are available in c-ares 1.3.1 and later.
+ The main ideas of this API are simply:
+
+ 1. The application can use whatever event system it likes as it gets info
+    from libcurl about what file descriptors libcurl waits for what action
+    on. (The previous API returns `fd_sets` which is very
+    `select()`-centric).
+
+ 2. When the application discovers action on a single socket, it calls
+    libcurl and informs that there was action on this particular socket and
+    libcurl can then act on that socket/transfer only and not care about
+    any other transfers. (The previous API always had to scan through all
+    the existing transfers.)
+
+ The idea is that [`curl_multi_socket_action()`][7] calls a given callback
+ with information about what socket to wait for what action on, and the
+ callback only gets called if the status of that socket has changed.
+
+ We also added a timer callback that makes libcurl call the application when
+ the timeout value changes, and you set that with [`curl_multi_setopt()`][9]
+ and the [`CURLMOPT_TIMERFUNCTION`][10] option. To get this to work,
+ Internally, there's an added struct to each easy handle in which we store
+ an "expire time" (if any). The structs are then "splay sorted" so that we
+ can add and remove times from the linked list and yet somewhat swiftly
+ figure out both how long there is until the next nearest timer expires
+ and which timer (handle) we should take care of now. Of course, the upside
+ of all this is that we get a [`curl_multi_timeout()`][8] that should also
+ work with old-style applications that use [`curl_multi_perform()`][11].
+
+ We created an internal "socket to easy handles" hash table that given
+ a socket (file descriptor) returns the easy handle that waits for action on
+ that socket.  This hash is made using the already existing hash code
+ (previously only used for the DNS cache).
+
+ To make libcurl able to report plain sockets in the socket callback, we had
+ to re-organize the internals of the [`curl_multi_fdset()`][12] etc so that
+ the conversion from sockets to `fd_sets` for that function is only done in
+ the last step before the data is returned. I also had to extend c-ares to
+ get a function that can return plain sockets, as that library too returned
+ only `fd_sets` and that is no longer good enough. The changes done to c-ares
+ are available in c-ares 1.3.1 and later.
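
A skeleton of the callback wiring this API description implies; it is not a working event loop, just the registration plus the call an application would make when its own timer fires (socket events would instead pass the affected socket and an event bitmask):

    #include <stdio.h>
    #include <curl/curl.h>

    /* libcurl tells us which socket it wants watched for which events;
       a real application would add/update/remove it in its event loop */
    static int socket_cb(CURL *easy, curl_socket_t s, int what,
                         void *userp, void *socketp)
    {
      printf("socket %ld: %s\n", (long)s,
             (what == CURL_POLL_REMOVE) ? "stop watching" : "watch");
      return 0;
    }

    /* libcurl tells us whenever its nearest timeout changes */
    static int timer_cb(CURLM *multi, long timeout_ms, void *userp)
    {
      printf("arm a timer for %ld ms\n", timeout_ms);
      return 0;
    }

    int main(void)
    {
      int running = 0;
      CURLM *multi = curl_multi_init();

      curl_multi_setopt(multi, CURLMOPT_SOCKETFUNCTION, socket_cb);
      curl_multi_setopt(multi, CURLMOPT_TIMERFUNCTION, timer_cb);

      /* ... add easy handles; then, when the application's timer fires: */
      curl_multi_socket_action(multi, CURL_SOCKET_TIMEOUT, 0, &running);

      curl_multi_cleanup(multi);
      return 0;
    }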
 
 <a name="structs"></a>
 Structs in libcurl
@@ -851,31 +857,31 @@ for older and later versions as things don't change drastically that often.
 ## Curl_easy
 
   The `Curl_easy` struct is the one returned to the outside in the external API
-  as a "CURL *". This is usually known as an easy handle in API documentations
+  as a `CURL *`. This is usually known as an easy handle in API documentations
   and examples.
 
   Information and state that is related to the actual connection is in the
-  'connectdata' struct. When a transfer is about to be made, libcurl will
+  `connectdata` struct. When a transfer is about to be made, libcurl will
   either create a new connection or re-use an existing one. The particular
   connectdata that is used by this handle is pointed out by
   `Curl_easy->easy_conn`.
 
   Data and information that regard this particular single transfer is put in
-  the SingleRequest sub-struct.
+  the `SingleRequest` sub-struct.
 
   When the `Curl_easy` struct is added to a multi handle, as it must be in
-  order to do any transfer, the ->multi member will point to the `Curl_multi`
-  struct it belongs to. The ->prev and ->next members will then be used by the
-  multi code to keep a linked list of `Curl_easy` structs that are added to
-  that same multi handle. libcurl always uses multi so ->multi *will* point to
-  a `Curl_multi` when a transfer is in progress.
+  order to do any transfer, the `->multi` member will point to the `Curl_multi`
+  struct it belongs to. The `->prev` and `->next` members will then be used by
+  the multi code to keep a linked list of `Curl_easy` structs that are added to
+  that same multi handle. libcurl always uses multi so `->multi` *will* point
+  to a `Curl_multi` when a transfer is in progress.
 
-  ->mstate is the multi state of this particular `Curl_easy`. When
+  `->mstate` is the multi state of this particular `Curl_easy`. When
   `multi_runsingle()` is called, it will act on this handle according to which
   state it is in. The mstate is also what tells which sockets to return for a
   specific `Curl_easy` when [`curl_multi_fdset()`][12] is called etc.
 
-  The libcurl source code generally use the name 'data' for the variable that
+  The libcurl source code generally use the name `data` for the variable that
   points to the `Curl_easy`.
 
   When doing multiplexed HTTP/2 transfers, each `Curl_easy` is associated with
@@ -890,16 +896,16 @@ for older and later versions as things don't change drastically that often.
   re-use an existing one instead of creating a new as it creates a significant
   performance boost.
 
-  Each 'connectdata' identifies a single physical connection to a server. If
+  Each `connectdata` identifies a single physical connection to a server. If
   the connection can't be kept alive, the connection will be closed after use
   and then this struct can be removed from the cache and freed.
 
   Thus, the same `Curl_easy` can be used multiple times and each time select
-  another connectdata struct to use for the connection. Keep this in mind, as
-  it is then important to consider if options or choices are based on the
+  another `connectdata` struct to use for the connection. Keep this in mind,
+  as it is then important to consider if options or choices are based on the
   connection or the `Curl_easy`.
 
-  Functions in libcurl will assume that connectdata->data points to the
+  Functions in libcurl will assume that `connectdata->data` points to the
   `Curl_easy` that uses this connection (for the moment).
 
   As a special complexity, some protocols supported by libcurl require a
@@ -914,7 +920,7 @@ for older and later versions as things don't change drastically that often.
   this single struct and thus can be considered a single connection for most
   internal concerns.
 
-  The libcurl source code generally use the name 'conn' for the variable that
+  The libcurl source code generally use the name `conn` for the variable that
   points to the connectdata.
 
 <a name="Curl_multi"></a>
@@ -923,7 +929,7 @@ for older and later versions as things don't change drastically that often.
   Internally, the easy interface is implemented as a wrapper around multi
   interface functions. This makes everything multi interface.
 
-  `Curl_multi` is the multi handle struct exposed as "CURLM *" in external
+  `Curl_multi` is the multi handle struct exposed as `CURLM *` in external
   APIs.
 
   This struct holds a list of `Curl_easy` structs that have been added to this
@@ -950,9 +956,9 @@ for older and later versions as things don't change drastically that often.
   `->conn_cache` points to the connection cache. It keeps track of all
   connections that are kept after use. The cache has a maximum size.
 
-  `->closure_handle` is described in the 'connectdata' section.
+  `->closure_handle` is described in the `connectdata` section.
 
-  The libcurl source code generally use the name 'multi' for the variable that
+  The libcurl source code generally use the name `multi` for the variable that
   points to the `Curl_multi` struct.
 
 <a name="Curl_handler"></a>
@@ -961,8 +967,8 @@ for older and later versions as things don't change drastically that often.
   Each unique protocol that is supported by libcurl needs to provide at least
   one `Curl_handler` struct. It defines what the protocol is called and what
   functions the main code should call to deal with protocol specific issues.
-  In general, there's a source file named [protocol].c in which there's a
-  "struct `Curl_handler` `Curl_handler_[protocol]`" declared. In url.c there's
+  In general, there's a source file named `[protocol].c` in which there's a
+  `struct Curl_handler Curl_handler_[protocol]` declared. In `url.c` there's
   then the main array with all individual `Curl_handler` structs pointed to
   from a single array which is scanned through when a URL is given to libcurl
   to work with.
@@ -974,9 +980,9 @@ for older and later versions as things don't change drastically that often.
   `->setup_connection` is called to allow the protocol code to allocate
   protocol specific data that then gets associated with that `Curl_easy` for
   the rest of this transfer. It gets freed again at the end of the transfer.
-  It will be called before the 'connectdata' for the transfer has been
+  It will be called before the `connectdata` for the transfer has been
   selected/created. Most protocols will allocate its private
-  'struct [PROTOCOL]' here and assign `Curl_easy->req.protop` to point to it.
+  `struct [PROTOCOL]` here and assign `Curl_easy->req.protop` to point to it.
 
   `->connect_it` allows a protocol to do some specific actions after the TCP
   connect is done, that can still be considered part of the connection phase.
@@ -1036,7 +1042,7 @@ for older and later versions as things don't change drastically that often.
     limit which "direction" of socket actions that the main engine will
     concern itself with.
 
-  - `PROTOPT_NONETWORK` - a protocol that doesn't use network (read file:)
+  - `PROTOPT_NONETWORK` - a protocol that doesn't use network (read `file:`)
 
   - `PROTOPT_NEEDSPWD` - this protocol needs a password and will use a default
     one unless one is provided
@@ -1055,7 +1061,7 @@ for older and later versions as things don't change drastically that often.
 ## Curl_share
 
   The libcurl share API allocates a `Curl_share` struct, exposed to the
-  external API as "CURLSH *".
+  external API as `CURLSH *`.
 
   The idea is that the struct can have a set of its own versions of caches and
   pools and then by providing this struct in the `CURLOPT_SHARE` option, those
@@ -1072,7 +1078,7 @@ for older and later versions as things don't change drastically that often.
 ## CookieInfo
 
   This is the main cookie struct. It holds all known cookies and related
-  information. Each `Curl_easy` has its own private CookieInfo even when
+  information. Each `Curl_easy` has its own private `CookieInfo` even when
   they are added to a multi handle. They can be made to share cookies by using
   the share API.
 
diff --git a/docs/RELEASE-PROCEDURE.md b/docs/RELEASE-PROCEDURE.md
index dbae96f6e..70609fd70 100644
--- a/docs/RELEASE-PROCEDURE.md
+++ b/docs/RELEASE-PROCEDURE.md
@@ -16,7 +16,7 @@ in the source code repo
 
 - run "./maketgz 7.34.0" to build the release tarballs. It is important that
   you run this on a machine with the correct set of autotools etc installed
-  as this is what then will be shipped and used by most users on *nix like
+  as this is what then will be shipped and used by most users on \*nix like
   systems.
 
 - push the git commits and the new tag
diff --git a/docs/SSL-PROBLEMS.md b/docs/SSL-PROBLEMS.md
index 91803e22d..aaf7bdb59 100644
--- a/docs/SSL-PROBLEMS.md
+++ b/docs/SSL-PROBLEMS.md
@@ -53,9 +53,9 @@
   Note that these weak ciphers are identified as flawed. For example, this
   includes symmetric ciphers with less than 128 bit keys and RC4.
 
-  WinSSL in Windows XP is not able to connect to servers that no longer
+  Schannel in Windows XP is not able to connect to servers that no longer
   support the legacy handshakes and algorithms used by those versions, so we
-  advice against building curl to use WinSSL on really old Windows versions.
+  advice against building curl to use Schannel on really old Windows versions.
 
   References:
 
@@ -77,9 +77,9 @@
   Some SSL backends may do certificate revocation checks (CRL, OCSP, etc)
   depending on the OS or build configuration. The --ssl-no-revoke option was
   introduced in 7.44.0 to disable revocation checking but currently is only
-  supported for WinSSL (the native Windows SSL library), with an exception in
-  the case of Windows' Untrusted Publishers blacklist which it seems can't be
-  bypassed. This option may have broader support to accommodate other SSL
+  supported for Schannel (the native Windows SSL library), with an exception
+  in the case of Windows' Untrusted Publishers blacklist which it seems can't
+  be bypassed. This option may have broader support to accommodate other SSL
   backends in the future.
 
   References:

-- 
To stop receiving notification emails like this one, please contact
address@hidden


