
[taler-docs] branch master updated: sandbox DD


From: gnunet
Subject: [taler-docs] branch master updated: sandbox DD
Date: Thu, 10 Feb 2022 11:17:33 +0100

This is an automated email from the git hooks/post-receive script.

dold pushed a commit to branch master
in repository docs.

The following commit(s) were added to refs/heads/master by this push:
     new f324098  sandbox DD
f324098 is described below

commit f324098803807019b72c10175390c8ffcfa8490e
Author: Florian Dold <florian@dold.me>
AuthorDate: Thu Feb 10 11:17:30 2022 +0100

    sandbox DD
---
 design-documents/027-sandboxing-taler.rst | 107 ++++++++++++++++++++++++++++++
 1 file changed, 107 insertions(+)

diff --git a/design-documents/027-sandboxing-taler.rst b/design-documents/027-sandboxing-taler.rst
index b082ae5..c1e5293 100644
--- a/design-documents/027-sandboxing-taler.rst
+++ b/design-documents/027-sandboxing-taler.rst
@@ -12,6 +12,39 @@ Summary
 This document presents a method of deploying all the Taler
 services via one Docker container.
 
+Motivation
+==========
+
+It is very difficult to build GNU Taler from scratch.  It is even more difficult
+to install, configure and launch it correctly.
+
+The purpose of the sandbox is to have a demonstration system that can be both
+built and launched, ideally with a single command.
+
+Requirements
+============
+
+- No external services should be required; the only dependencies should be:
+
+  - podman/docker
+  - optionally: configuration files to further customize the setup
+
+- All services that are used should be installed from repositories
+  and not built from scratch (i.e. debian repos or PyPI)
+
+- There should be some "admin page" for the whole sandbox that:
+
+  - Shows an overview of all deployed services, a link to their documentation
+    and the endpoints they expose
+  - Shows very simple statistics (e.g. number of transactions / withdrawals)
+  - Allows generating and downloading the auditor report
+
+- Developers should be able to launch the sandbox on their own machine
+
+  - Possibly using nightly repos instead of the official stable repos
+
+- We should be able to deploy it on $NAME.sandbox.taler.net
+
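+As a rough illustration of the intended "single command" usage (the image
+name ``taler-sandbox`` and the exposed port are hypothetical placeholders)::
+
+    # hypothetical image name; launches the whole demo with sane defaults
+    podman run --rm -p 8080:80 taler-sandbox
+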
 Design
 ======
 
@@ -46,13 +79,87 @@ Open questions
 
 - How to collect the static configuration values?
 
+  - => Via a configuration file that you pass to the container via
+    a mounted directory (=> `-v $MYCONFIG:/sandboxconfig`)
+  - If we don't pass any config, the container should have
+    sane defaults
+  - This is effectively a "meta configuration", because it will
+    be used to generate the actual configuration files
+    and do RESTful configuration at launch time.
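+
+  A rough sketch of what such a meta configuration and the mount could look
+  like (the file name, keys and image name are hypothetical)::
+
+      # $MYCONFIG/sandbox.conf (hypothetical example)
+      CURRENCY = KUDOS
+      BANK_ADMIN_PASSWORD = secret
+
+      # pass it to the container at launch time
+      podman run --rm -v $MYCONFIG:/sandboxconfig taler-sandbox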
+
 - How to persist, at build time, the information
   needed later at launch time to create the RESTful
   resources?
 
+  - => The configuration should be done at launch-time of the container.
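+
+    A conceivable entrypoint sketch (the helper commands are hypothetical and
+    only illustrate the launch-time flow)::
+
+        #!/bin/sh
+        # hypothetical helpers:
+        # 1. generate the actual service configs from the mounted meta config
+        generate-taler-configs /sandboxconfig /etc/taler
+        # 2. create the RESTful resources (bank accounts, merchant instances, ...)
+        bootstrap-sandbox-resources
+        # 3. start the services
+        start-sandbox-services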
+
 - Should we at this iteration hard-code passwords too?
   With generated passwords, (1) it won't be possible to
   manually log-in to services, (2) it won't be possible
   to write the exchange password for Nexus in the conf.
   Clearly, that's a problem when the sandbox is served
   to the outside.
+
+- How is data persisted? (i.e. where do we store stuff)
+
+  - By allowing a data directory on the host to be mounted into the container
+    (This stores the DB files, config files, key files, etc.)
+  - ... even for data like the postgresql database
+  - future/optional: we *might* allow connection to an external postgresql
+    database as well
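+
+  For example, persistence could simply mean mounting a host directory (the
+  host path and the ``/sandboxdata`` mount point are hypothetical)::
+
+      podman run --rm \
+        -v /srv/taler-sandbox-data:/sandboxdata \
+        -v $MYCONFIG:/sandboxconfig \
+        taler-sandbox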
+
+- How are services supervised?
+  
+  - SystemD? gnunet-arm? supervisord? something else?
+
+    - SystemD does not work well inside containers
+
+  - alternative: one container per service, use (docker/podman)-compose
+
+    - Either one docker file per service, *or* one base container that
+      can be launched as different services via command line arg
+
+    - Advantage: It's easy to see the whole architecture from the compose yaml
+      file
+    - Advantage: It would be easy to later deploy this on kubernetes etc.
+
+    - list of containers:
+
+      - DB container (postgres)
+      - Exchange container (contains all exchange services, for now)
+        - Split this up further?
+      - Merchant container
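+
+    A skeleton of such a compose file (image names, service commands and
+    credentials are placeholders, not the actual packaging)::
+
+        # docker-compose.yml (sketch)
+        services:
+          db:
+            image: postgres:14
+            environment:
+              POSTGRES_PASSWORD: sandbox
+            volumes:
+              - ./data/db:/var/lib/postgresql/data
+          exchange:
+            # one base image, launched as different services via command
+            image: taler-sandbox-base
+            command: run-exchange
+            depends_on: [db]
+          merchant:
+            image: taler-sandbox-base
+            command: run-merchant
+            depends_on: [db, exchange]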
+
+- Do we have multi-tenancy for the sandbox? (I.e. do we allow multiple
+  currencies/exchanges/merchants/auditors per sandbox)
+
+  - Might be simpler if we disallow this
+
+- How do we handle TLS?
+
+  - Do we always do HTTPS in the sandbox container?
+  - We need to think about external and internal requests
+    to the sandbox
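+
+  If we go with self-signed certificates for container-internal traffic (see
+  the URL discussion below), a throwaway certificate could be generated at
+  build or launch time, e.g.::
+
+      # subject and paths are illustrative only
+      openssl req -x509 -newkey rsa:4096 -nodes -days 365 \
+        -subj "/CN=exchange.sandbox" \
+        -keyout /etc/taler/tls/key.pem -out /etc/taler/tls/cert.pem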
+
+- How do we handle (external vs internal) URLs?
+
+  - If we use http://localhost:$PORT for everything, we can't expose
+    the services externally
+  - Example 1: Sandbox should run on sb1.sandbox.taler.net.
+
+    - What will be the base URL for the exchange in the merchant config?
+    - If it's https://sb1.sandbox.taler.net/exchange, we need some /etc/hosts entry
+      inside the container
+    - Once you want to expose the sandbox externally, you need a proper TLS
+      cert (e.g. letsencrypt)
+    - Inside the container, you can get away with self-signed certificates
+    - Other solution: Just require the external nginx (e.g. at gv) to reverse proxy
+      sb1.sandbox.taler.net back to the container. This means that all communication
+      between services inside the sandbox container goes through gv
+
+      - Not great, but probably fine for first iteration
+      - Disadvantage: To test the container in the non-localhost mode, you
+        need the external proxy running
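+
+      On the gv side this would amount to a plain reverse-proxy block, roughly
+      (the forwarded port is a placeholder, certificate paths follow the usual
+      letsencrypt layout)::
+
+          server {
+              listen 443 ssl;
+              server_name sb1.sandbox.taler.net;
+              ssl_certificate     /etc/letsencrypt/live/sb1.sandbox.taler.net/fullchain.pem;
+              ssl_certificate_key /etc/letsencrypt/live/sb1.sandbox.taler.net/privkey.pem;
+              location / {
+                  # forward everything to the sandbox container
+                  proxy_pass http://127.0.0.1:8080;
+                  proxy_set_header Host $host;
+              }
+          }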
+
+- Where do we take packages from?
+
+  - By default, from the stable taler-systems.com repos and PyPI
+  - Alternatively, via the nightly gv debian repo
+  - Since we install packages at container build time, this setting (stable vs
+    nightly) results in different container base images
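+
+    E.g., the base image build could take the repository flavour as a build
+    argument (the repository URL, key handling and package names below are
+    placeholders)::
+
+        # Dockerfile (sketch); build with --build-arg TALER_REPO=nightly
+        FROM debian:bullseye-slim
+        ARG TALER_REPO=stable
+        RUN echo "deb [trusted=yes] https://example.org/taler/$TALER_REPO bullseye main" \
+              > /etc/apt/sources.list.d/taler.list \
+         && apt-get update \
+         && apt-get install -y taler-exchange taler-merchant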

-- 
To stop receiving notification emails like this one, please contact
gnunet@gnunet.org.


