gnunet-svn

[lsd0003] branch master updated: Added comments


From: gnunet
Subject: [lsd0003] branch master updated: Added comments
Date: Tue, 15 Jun 2021 19:07:23 +0200

This is an automated email from the git hooks/post-receive script.

elias-summermatter pushed a commit to branch master
in repository lsd0003.

The following commit(s) were added to refs/heads/master by this push:
     new ba6fa51  Added comments
ba6fa51 is described below

commit ba6fa51c57e308cd7855f053d5f954dd192d3101
Author: Elias Summermatter <elias.summermatter@seccom.ch>
AuthorDate: Tue Jun 15 19:04:34 2021 +0200

    Added comments
---
 draft-summermatter-set-union.xml | 47 ++++++++++++++++++++++++++++++----------
 1 file changed, 35 insertions(+), 12 deletions(-)

diff --git a/draft-summermatter-set-union.xml b/draft-summermatter-set-union.xml
index c2e154a..c0a8d52 100644
--- a/draft-summermatter-set-union.xml
+++ b/draft-summermatter-set-union.xml
@@ -1648,6 +1648,10 @@ hashSum |    0x0101   |    0x5151   |    0x5050   |    0x0000   |
                         <dd>
                            is SETU_P2P_DONE as registered in <xref target="gana" format="title" /> in network byte order.
                         </dd>
+                        <dt>FINAL CHECKSUM</dt>
+                        <dd>
+                            a SHA-512 hash of the full set after synchronization, allowing both peers to verify that the sets are identical in the end.
+                        </dd>
                     </dl>
                 </section>
             </section>
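The FINAL CHECKSUM introduced above could be computed, for instance, as an order-independent SHA-512 digest of the set. A minimal Python sketch; the XOR-of-element-hashes construction and the function name are illustrative assumptions, since the draft only specifies a SHA-512 hash of the full set:

```python
import hashlib

def set_checksum(elements):
    """Illustrative FINAL CHECKSUM: XOR of the SHA-512 hashes of the
    individual elements, so the result is independent of the order in
    which the set is enumerated. (Assumed construction, not normative.)"""
    acc = bytes(64)  # 512 zero bits
    for element in elements:
        digest = hashlib.sha512(element).digest()
        acc = bytes(a ^ b for a, b in zip(acc, digest))
    return acc

# Both peers hash their local set after synchronization;
# equal checksums indicate identical sets.
```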
@@ -1672,8 +1676,10 @@ hashSum |    0x0101   |    0x5151   |    0x5050   |    0x0000   |
                         <artwork name="" type="" align="left" alt=""><![CDATA[
         0     8     16    24    32    40    48    56
         +-----+-----+-----+-----+-----+-----+-----+-----+
-        |  MSG SIZE |  MSG TYPE |
-        +-----+-----+-----+-----+-----+-----+-----+-----+
+        |  MSG SIZE |  MSG TYPE |     FINAL CHECKSUM
+        +-----+-----+-----+-----+
+        /                                               /
+        /                                               /
                  ]]></artwork>
                     </figure>
                     <t>where:</t>
@@ -1686,6 +1692,10 @@ hashSum |    0x0101   |    0x5151   |    0x5050   |    0x0000   |
                         <dd>
                            the type of SETU_P2P_FULL_DONE as registered in <xref target="gana" format="title" /> in network byte order.
                         </dd>
+                        <dt>FINAL CHECKSUM</dt>
+                        <dd>
+                            a SHA-512 hash of the full set after synchronization, allowing both peers to verify that the sets are identical in the end.
+                        </dd>
                     </dl>
                 </section>
             </section>
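The message layout above can be sketched as a wire encoding. A hedged Python illustration; the function name and the msg_type parameter are hypothetical, and the concrete type value is the one registered in GANA:

```python
import struct

def encode_full_done(msg_type, final_checksum):
    """Hypothetical encoding of the SETU_P2P_FULL_DONE message per the
    layout above: 16-bit MSG SIZE and 16-bit MSG TYPE in network byte
    order, followed by the 64-byte SHA-512 FINAL CHECKSUM."""
    assert len(final_checksum) == 64
    size = 2 + 2 + 64  # header fields plus checksum
    return struct.pack("!HH", size, msg_type) + final_checksum
```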
@@ -2853,9 +2863,12 @@ END FUNCTION
                        <!-- FIXME: I don't see how the next sentence makes sense. If we got a FULL_DONE,
                             and we still have differing sets, something is broken and re-doing it hardly
                             makes sense, right? @Christian im not sure about that it could be that for example
-                             the set size changes (from application or other sync) while synchronisation is in progress....-->
-                        If the sets differ, a resynchronisation is required. The number of possible
-                        resynchronisation MUST be limited, to prevent resource exhaustion attacks.
+                             the set size changes (from application or other sync) while synchronisation is in progress.... something went
+                             wrong (hardware failures). Should never occur! Failed! The final checksum in DONE/FULL DONE is SHA-512-->
+
+                        If the sets differ (the FINAL CHECKSUM field in the <xref target="messages_full_done" format="title" />
+                        does not match the SHA-512 hash of our set), the operation has failed. This is a strong
+                        indicator that something went seriously wrong (e.g. a hardware fault); this should never happen.
                       </t>
                     </dd>
                 </dl>
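The failure condition described above amounts to a byte-wise comparison of the received FINAL CHECKSUM against the locally computed one. A small illustrative sketch (names are hypothetical):

```python
def verify_final_checksum(local_checksum, received_checksum):
    """On receiving a DONE/FULL DONE message, compare its FINAL CHECKSUM
    field with the SHA-512 checksum of the local set. A mismatch means
    the operation has failed and the sets still differ."""
    if local_checksum != received_checksum:
        raise RuntimeError("set union failed: FINAL CHECKSUM mismatch")
```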
@@ -2888,7 +2901,12 @@ END FUNCTION
                          all of the other fragments/parts of the IBF first and
                         that the parameters are thus consistent apply. @Christian So we would have
                         to transmit the number of IBF slices that will be transmitted first
-                         to do this check right?
+                         to do this check right? Receive in monotone order and check that
+                         it was the last one.  Are the sizes always the same?
+                         Size plausibility check:
+                         - tie the initial byzantine upper bound to the set size difference
+                         - on repetition, it can only double
+                         - check exactly via offers/demands
                           -->
                 </dl>
             </section>
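The size plausibility notes above could translate into a bound along the following lines. All names and the exact formula are assumptions drawn from the notes, not normative text:

```python
def max_plausible_difference(initial_upper_bound, repetitions):
    """Illustrative size plausibility bound: the plausible set
    difference starts at the initial byzantine upper bound and can at
    most double with each repeated, larger IBF."""
    return initial_upper_bound << repetitions  # doubles per repetition

def is_plausible(claimed_difference, initial_upper_bound, repetitions):
    """Check a claimed set difference against the doubling bound."""
    return claimed_difference <= max_plausible_difference(
        initial_upper_bound, repetitions)
```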
@@ -2900,19 +2918,24 @@ END FUNCTION
                    generating and transmitting an unlimited number of IBFs that all do not decode, or
                    to generate an IBF constructed to send the peers in an endless loop.
                    To prevent an endless loop in decoding, loop detection MUST be implemented.
-                    The simplest solution is to prevent decoding of more than a given number of elements.
+                    The first solution is to prevent decoding of more than a given number of elements.
                    <!-- FIXME: this description is awkward. Needs to be discussed.
                         I think you also do not mean 'hashes' but 'element IDs'. @Christian just omit the details
                         i guess anybody can freely decide how to handle loops its just important that e protection is
-                         in place. Right?-->
+                         in place. Right?
+                         - Remove salt and save this in the hashmap
+                         - stored beforehand.
+                         - do both
+                         - never more than MIN(number of buckets, total set sizes)
+                         -->
+                    A more robust solution is to implement an algorithm that detects a loop by
                    analyzing past partially decoded IBFs. This can be achieved
-                    by saving the element IDs of all prior partly decoded IBFs hashes in a hashmap and check
-                    for every inserted hash, if it is already in the hashmap.
+                    by saving the element IDs of all prior partly decoded IBFs in a hashmap and checking,
+                    for every newly inserted element ID, whether it is already in the hashmap.
                 </t>
                 <t>
                     If the IBF decodes more elements than are plausible, the
-                    operation MUST be terminated.Furthermore, if the IBF
+                    operation MUST be terminated. Furthermore, if the IBF
                     decoding successfully terminates and fewer elements were
                    decoded than plausible, the operation MUST also be terminated.
                    The upper thresholds for decoded elements from the IBF is the
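The hashmap-based loop detection described in the diff can be sketched as follows. Python is used purely for illustration, and the class and method names are hypothetical:

```python
class IbfLoopDetector:
    """Remember the element IDs of all previously partly decoded IBFs
    and flag any element ID that reappears in a later decode round."""

    def __init__(self):
        self.seen = set()  # element IDs decoded in prior rounds

    def decoded(self, element_id):
        """Record a freshly decoded element ID; return True if it was
        already decoded before, i.e. the decoder is in a loop."""
        if element_id in self.seen:
            return True
        self.seen.add(element_id)
        return False
```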

-- 
To stop receiving notification emails like this one, please contact
gnunet@gnunet.org.


