discuss-gnuradio

Re: QAM constellation script


From: Marcus Müller
Subject: Re: QAM constellation script
Date: Wed, 3 May 2023 18:20:10 +0200
User-agent: Mozilla/5.0 (X11; Linux x86_64; rv:102.0) Gecko/20100101 Thunderbird/102.7.1

Hi George,

that's a multiplicative voice scrambler!

The Second World War voice scrambler system "SIGSALY" [1] was kind of similar; just that the scrambling sequence came out of a noisy vacuum tube, not Python's random.random(), and that the combination method was taking samples and adding them modulo 6, instead of multiplying the phase (which is inherently modulo 2π).

So, that should work quite nicely for spreading; it's not as great for the secretive purposes for which the (later) Allies developed the original scramblers:

1. We're re-using the same length-10240 sequence as the scrambling
   signal – that's perfectly fine for the spreading, but from a
   cryptographic point of view, if the secrecy of a message depended on
   an eavesdropper not knowing the sequence, it's not great: with a bit
   of statistics it's easy to recover an approximate spreading sequence.
2. Since the bandwidth of the scrambled signal is much higher than that
   of the unscrambled message signal, an envelope detector on the whole
   bandwidth can probably (I didn't try) recover a pretty intelligible
   version of the audio signal (the phase multiplications preserve
   magnitude, so when the power of the audio signal is low, so is the
   power of the signal you transmit; it's kind of similar to AM, but
   instead of modulating a carrier, we modulate a white noise source).
   Of course, knowing the spreading sequence (i.e., knowing how to seed
   random.seed()) gives a large processing gain and thus much better
   SNR at the receiver's output than plain power detection.
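
To make the scramble/descramble symmetry concrete, here's a minimal Python sketch of the multiplicative-phase-scrambler idea (my own illustration, not your actual flow graph; phase_scramble and the seed value are made up). Each sample is rotated by an independent pseudorandom phase; a receiver that shares the seed can rotate everything back exactly:

```python
import cmath
import random

def phase_scramble(samples, seed, invert=False):
    """Multiply each sample by a pseudorandom unit-magnitude phasor.

    Hypothetical helper for illustration: with the same seed,
    invert=True undoes the scrambling exactly -- the processing gain
    a receiver that knows how to seed the PRNG enjoys.
    """
    rng = random.Random(seed)
    sign = -1.0 if invert else 1.0
    out = []
    for x in samples:
        # phase uniform in [-pi, +pi], like a VCO fed with random
        # numbers in [-0.5, +0.5] at sensitivity 2*pi*f_sample
        phi = 2 * cmath.pi * (rng.random() - 0.5)
        out.append(x * cmath.exp(sign * 1j * phi))
    return out

# a narrowband test tone; scrambling spreads it, descrambling restores it
message = [cmath.exp(2j * cmath.pi * 0.01 * n) for n in range(100)]
scrambled = phase_scramble(message, seed=42)
recovered = phase_scramble(scrambled, seed=42, invert=True)
```

(This applies independent phasors per sample rather than accumulating phase the way the VCO does; for the whiteness of the result and the shared-seed recovery argument, that makes no difference.)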

But you're not trying to send secret messages – you're just trying to use a wide band of spectrum, and for that, your solution works fine, provided your synchronization between transmitter and receiver is good enough in time and frequency.

There's a bit of a hurdle there: no two oscillators are /exactly/ the same, and there are also things like Doppler – so you need to use your preamble (the fh == False case) not only to know at which time to start your random multiplication, but also to know by how many Hertz your receiver is tuned off the "correct" frequency. If your receiver doesn't have the exact same frequency as the transmitter, you see that as a frequency shift in the baseband, i.e., a multiplication with exp(2j · π · f_error · t), which means the phase of every sample gets shifted by an additional 2π · f_error/f_sample relative to its predecessor, and the receiver might no longer work. To combat that, you need frequency recovery (which could very nicely be done on a preamble; it's just not inherently trivial to do with a chirp), and because your receiver's oscillator might still drift over time, adding a pilot tone every so many seconds might help.
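
To put a number on how quickly that phase error accumulates, here's a back-of-the-envelope sketch; the sample rate and oscillator offset are assumed values I picked for illustration, not anything from your flow graph:

```python
from math import pi

# Illustrative, assumed values:
f_sample = 1e6    # sample rate in Hz
f_error = 100.0   # TX/RX oscillator offset in Hz

# A frequency error multiplies the baseband by exp(2j*pi*f_error*t),
# i.e., every sample is rotated by an extra 2*pi*f_error/f_sample
# radians relative to its predecessor.
phase_per_sample = 2 * pi * f_error / f_sample   # ~0.63 mrad per sample

# After 10 ms (10 000 samples) the accumulated rotation is a full turn,
# so a descrambler without frequency recovery has long since lost track.
accumulated = 10_000 * phase_per_sample          # = 2*pi
```

Even a 100 Hz offset – tiny compared to a 1 MHz bandwidth – wraps the phase completely within 10 ms, which is why the preamble-based frequency estimate (and perhaps a pilot tone) matters.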

In practice, what you'd probably do is take your idea and change a few things about it. I'll illustrate the design process with sensible example values below – sensible in the sense that everything involved is available technology in the open-source space, and not too hard to get to work on normal hardware:

It seems you're directly modulating analog audio. Instead, since ca. the 1980s, you'd usually use a voice encoder/decoder pair (i.e., a vocoder) to transform the voice into bits, then add error correction to them, then spread these – that's a lot easier, and you get to spend the energy you transmit on the bandwidth the "useful" information in your voice actually has, and on making it suffer the least error in the presence of noise, instead of having to transport the full audio bandwidth! Then, you could design a system around the simple Direct-Sequence Spread Spectrum (DSSS) method.

DSSS is rather easy: you take a transmit symbol, repeat it by a spreading factor F, and then flip the sign of the repeated symbols ("chips") according to a fixed pseudorandom bit sequence that receiver and transmitter both share. You send the signal at the full sample rate, meaning you increase the bandwidth by F (that's the spreading). At the receiver, you flip in the same manner, "unflipping" the original flips, so you just get the noisy original repeated symbols. You add up F of them, which gives you F times the amplitude of the original signal. Because you scale the signal amplitude by F, you get F² times the signal power. Noise is uncorrelated with itself, so its amplitudes don't add up linearly – the noise variance, and hence its power, does. So, noise power increases with F, signal power with F², and SNR increases with F.
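
The whole DSSS round trip described above fits in a few lines; this is a toy sketch (spreading factor, chip sequence, and noise level are arbitrary choices of mine), demonstrating that hard decisions survive a per-chip SNR well below 0 dB:

```python
import random

def dsss_spread(bits, chips):
    """Repeat each ±1 symbol once per chip, flipping signs per the chip sequence."""
    return [b * c for b in bits for c in chips]

def dsss_despread(samples, chips):
    """Unflip with the same chips, then integrate F chips per symbol."""
    F = len(chips)
    return [sum(s * c for s, c in zip(samples[i:i + F], chips))
            for i in range(0, len(samples), F)]

rng = random.Random(0)          # fixed seed: TX and RX share the sequence
F = 64                          # spreading factor (arbitrary for this demo)
chips = [rng.choice((-1, 1)) for _ in range(F)]
bits = [1, -1, 1, 1, -1]

tx = dsss_spread(bits, chips)
# AWGN with a per-chip SNR of about -6 dB (noise power 4 vs. signal power 1):
rx = [s + rng.gauss(0.0, 2.0) for s in tx]
# After despreading, signal amplitude grows by F, noise only by sqrt(F):
decisions = [1 if v > 0 else -1 for v in dsss_despread(rx, chips)]
```

The integrator output is ±F plus noise of standard deviation 2·√F, so with F = 64 the decision variable sits about 4 noise standard deviations away from the threshold, and the bits come back out despite the sub-0 dB chip SNR.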

1. your audio bitstream has a bit rate b. You typically find that by
   listening to a few examples of audio encoders until one has the
   quality you need at a nicely low bit rate.
   /Example/: b = 2.4 kb/s (model assumption: a voice codec at
   2.4 kb/s, e.g. codec2 [6] or LPCNet)
2. your transmission allows for a certain number of bit errors until it
   gets ugly, so you define a maximum acceptable Bit-Error Rate (BER)
   /Example/: e = 10⁻⁵
3. you choose a class of channel codes (or you try a lot). You pick one
   that achieves your desired e at reasonable complexity with a
   high-as-possible code rate r.
   /Example/: I lazily didn't investigate, but used the DVB-S Return
   Link (DVB-RCS2) Double-Binary Turbo Code of dimension (1880,1504),
   because
     * it's used for low-power satellite uplinks,
     * it's optimized for satellite-typical SNRs,
     * there's a clever decoder architecture for it which is a bit
       nicer as soon as the SNR gets better than the worst case [5],
     * there are BER curves that other people already measured for me
     * … using an open-source decoder implementation that achieves
       megabits of throughput [4] at the target error rate,
     * it has code rate r = 4/5 (so, you get 5/4 the amount of bits
       out that you put in), and
     * for an e of 10⁻⁵ you need an Eb/N0 of a bit below 3.4 dB [3],
       which translates to an SNR of (3.4 + 10·log(3)) dB ≈ 8.2 dB.
4. The channel code you chose above has a rate r, so you get a coded
   bit rate of T = b/r.
   /Example/: r = 4/5, b = 2.4 kb/s -> T = 3 kb/s
5. your RF channel allows for a symbol rate S. That means you get to
   send R = S·M bits per second (M being the bits per symbol).
   /Example/: S = 960 kSym/s, M = 3 (8PSK) -> R = 2880 kb/s
6. The ratio between the channel data rate R and the necessary data
   rate T is the spreading factor F that you'll be able to use, giving
   you 10·log(F) dB in spreading gain G.
   /Example/: F = 2880/3 = 960 -> G = 10·log(960) dB ≈ 30 dB
7. From your choice of channel code and acceptable error rate you
   arrived at a necessary SNR after despreading. Subtract the spreading
   gain to arrive at the necessary SNR before despreading.
   /Example/: (8.2 - 30) dB = -21.8 dB (yes, that's a negative SNR)
8. You calculate a link budget, which tells you how much TX power you
   need for that SNR you need, given the free space path losses up and
   down, the antenna gains, and the noise figures.
   /Example/: I asked Daniel Estévez [2], and he says that if you
   transmit at 37 dBW EIRP towards QO-100's WB transponder, you get 8
   dB SNR at the receiver on a ca. 100 kHz channel. For the full 1 MHz,
   we get 10 dB more noise power, so we need 47 dBW EIRP.
    1. If we can live with some dB less SNR, we can live with the same
       dB less transmit power, essentially.
    2. So, for an SNR of X dB, we need (X - 8 + 47) dBW = (X + 39) dBW
       transmit EIRP.
    3. Thus, for a receiver SNR of -21.8 dB + receiver noise figure
       (NF), we need (17.2 + NF) dBW EIRP.
    4. Guessing a 10 GHz receiver has some 8 dB NF, that's 25 dBW EIRP.
    5. Using a lossless 1 m dish at 2.4 GHz, we get some 25 dBi gain,
       so that means we need 0 dBW transmit power, i.e., 1 W @ 2.4 GHz.
       That sounds doable.
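
The arithmetic from steps 1-8 can be re-run in a few lines (same assumptions as above; small discrepancies against the rounded dB figures in the text come from carrying full precision, and the (X + 39) rule folds in the 37 dBW / 8 dB measurement plus the 10 dB bandwidth correction):

```python
from math import log10

b = 2400.0                 # codec bit rate, b/s (step 1)
r = 4 / 5                  # code rate (step 3)
T = b / r                  # coded bit rate: 3000 b/s (step 4)
S, M = 960e3, 3            # symbol rate, bits/symbol for 8PSK (step 5)
R = S * M                  # channel bit rate: 2.88 Mb/s (step 5)

F = R / T                  # spreading factor: 960 (step 6)
G = 10 * log10(F)          # spreading gain: ~29.8 dB (step 6)

ebn0 = 3.4                         # dB for BER 1e-5 with this code (step 3)
snr_after = ebn0 + 10 * log10(M)   # ~8.2 dB needed after despreading
snr_before = snr_after - G         # ~-21.7 dB before despreading (step 7)

nf = 8.0                   # assumed 10 GHz receiver noise figure, dB (step 8)
eirp = snr_before + nf + 39.0      # (X + 39) dBW rule from step 8: ~25 dBW
```

Subtracting the ~25 dBi dish gain from the ~25 dBW EIRP again lands at roughly 0 dBW, i.e., the 1 W transmit power from step 8.5.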

This almost sounds too nice, doesn't it? If life has taught me anything, it's that it's a terrible idea to do arithmetic in public, so I bet as soon as I hit "send" on this, I, or someone else, will find a mistake in my calculations. But in case that doesn't happen, it would seem that going the classical path – digitally encoding your voice, spreading the result, and applying a channel code¹ underway – is a relatively straightforward path.

Best regards,
Marcus

[1] https://www.nku.edu/~christensen/SIGSALY.pdf
[2] https://mastodon.social/@destevez/110300153690549977
[3] https://aff3ct.github.io/comparator.html?curve0=64ba6b8&curve1=36bd9fa&xaxis=Eb%2FN0&yaxes=BER%2CFER&xrange=-2.3039210041143505,9.460784878238586&yrange=-13.906029079828555,3.8769511346687824
[4] https://github.com/aff3ct/aff3ct/blob/c68f71c1be98a3d07d511883755632a2aa734c51/doc/source/user/simulation/parameters/codec/turbo_db/decoder.rst
[5] T. Tonnellier, C. Leroux, B. Le Gal, C. Jego, B. Gadat and N. Van Wambeke, "Lowering the error floor of double-binary turbo codes: The flip and check algorithm," /2016 9th International Symposium on Turbo Codes and Iterative Information Processing (ISTC)/, Brest, France, 2016, pp. 156-160, doi: 10.1109/ISTC.2016.7593096.
[6] Listen to an example of Codec2 working at 2.4 kb/s: https://www.rowetel.com/downloads/codec2/hts2a_2400.wav – I'd say even compared to a 2.4 kHz wide SSB signal with no noise at all, that's worlds better. (And if you use more than 1 bit per channel access, you'll need less than that bandwidth.)

¹ this is the standard situation for me to start a flamewar on the nature of DSSS: from a coding perspective, DSSS is just a repetition code. Repetition codes are what we call "bad", so instead of concatenating an r = 1/F DSSS repetition code after an r = 4/5 Turbo code, we'd be much, much better off just using an r = b/R "proper" code to begin with. I guess the thing is that the decoding complexities of very-low-rate code decoders are usually not fun at bad input SNRs.

On 5/2/23 07:26, George Katsimaglis wrote:
Hi Marcus,

Thanks for your detailed answer!!!
Can we consider this approach a new spread-spectrum technique, or is it really an existing one?

Best regards

George SV1BDS


    On Mon, May 1, 2023 at 23:14, Marcus Müller
    <mmueller@gnuradio.org> wrote:
    Hi George,
    thanks for the reply!

    >>    "VCO generator":

    >    It produces two different vectors depending on the fh Boolean
    >    value: a sawtooth vector of values between -0.5 and 0.5, or the
    >    same random values between -0.5 and 0.5. The sawtooth values
    >    are used in the alignment phase (adjusting myblock).

    Yes, indeed! But I was focussing on the self.fh == True case.
    >>    "Repeat":
    >
    >    This block offloads the previous block, as it is too heavy to
    >    produce random numbers at the rate needed.

    Well, you did not write it very efficiently, but agreed: if you
    just need to repeat the vector, by all means, this is a nice way
    to do it.


    >>    "VCO (complex)":
    >    The complex VCO, with the values specified, produces
    >    frequencies between -500 kHz for -0.5 and +500 kHz for +0.5
    >    input. This block creates the frequency change.

    But only for fh == False. For fh == True, you're really just
    piping random numbers into a mapper that maps them from
    [-0.5, +0.5] to a point on the unit circle with phase in [-π; +π].
    This phase is then what is output *for every sample*, separately.
    Your VCO really does only this:

    output[i] = output[i-1] · exp(1j · sensitivity/sampling_rate · input[i]),

    and in your case, sensitivity/sampling_rate == 2π, so

    output[i] = output[i-1] · exp(1j · 2π · input[i])
              = output[i-1] · (random phase increment in [-π;+π]),

    and because your input is just random independent numbers between
    -0.5 and +0.5, you just get random independent phases on the
    output: (pseudo-)white noise.

    Connect a QT GUI Frequency Sink to the output of your VCO
    (complex), set fh == True, and look how flat and random the output
    spectrum is.

    (I'm attaching a subgraph of your flow graph with that sink, and
    also a screenshot from
    the QT GUI Frequency Sink)

    >>    "Multiply":
    >    It moves the USB voice signal by the frequency created in the
    >    previous steps.

    Sorry, definitely no USB created anywhere! If that were the case,
    the QT GUI Frequency Sink mentioned above would have to show zero
    for the upper half (before you complex-conjugate) or the lower
    half of the spectrum: you shift your 0-frequency-symmetric message
    signal spectrum by the frequencies in the spectrum of your VCO's
    output, and if you only wanted them to end up in the USB, all
    these "shifting" frequencies would have to be in the upper half of
    the spectrum.

    >    You can better understand it considering frequencies rather
    >    than phases.

    I'm about to say the same to you :)
    Notice that frequency is the derivative of the phase. In your VCO
    block, you generate completely random phase increments. The
    derivative of that is again complete randomness – for every single
    sample.

    Anything that you really can say "has a frequency" needs to have
    the same phase increment
    for multiple samples. But you're switching the phase increment
    with every sample -
    completely randomly.

    This really nicely spreads the signal power from the narrowband
    input signal into the full sampling-rate bandwidth, but it's
    really not frequency hopping.

    Best regards,
    Marcus



