Probably one of the most mystified, 'dark magic' aspects of RF transmission is how and why weak signals, and the consequently poor SNR, end up poorly resolved, or not resolved in any usable fashion at all.
Noise overtaking the signal, whether inherently present, induced, locally generated, or received, reaches a magnitude that starts to seriously mask the critical blend of carrier and modulation. That is not the biggest issue in SSB, which is carrier-suppressed or carrier-reduced, but in modes where a carrier is used and expected, losing the carrier to noise while keeping only residual traces of the modulated, shifted, or amplified peaks is next to useless. That applies to FSK telegraphy and, by extension, to FSK-based DV transmission, since (depending on the system itself) DV is either pure FSK telegraphy or FSK modulated within a voice-bandwidth channel, the FSK telegraphy equivalent of MCW; either way, it's all telegraphy regardless of what is sent.
So, having established that telegraphy over RF, and the physics underlying it, is central to this -
A) What happens to the carrier when its level gets critically low?
Two things happen. Firstly, the perceived frequency becomes harder to determine or discriminate, as some of the cyclic transitions get lost or cease to perceptibly exist. On a receiver with little or no front-end filtering, this happens far later in the deterioration, because the poor or non-existent (often wideband) filtering has a wide passband and lacks aggressive attenuation, which keeps very weak signals discrimination-capable at really low magnitudes. But since such a receiver will be swamped by strong adjacent signalling, harmonics from lower frequencies, and other 'out of band' emissions, that 'quality' is not a quality desired at all outside of 'sniffer' detectors.
So what happens is, as a signal falls into a broken-carrier state, the capture effect of FSK and FM receivers stops treating it as being within the selected frequency and passband - and an aggressive, but desirable, narrow RF filter magnifies how quickly the broken carrier and modulation cease to exist as far as the receiver is concerned.
Where this happens under a shifting level (where the RMS level shifts drastically and frequently), what passes the NB filters gets misread after demodulation, and errors are generated. Get enough of these in sequence and the whole packet of data is dumped on good systems (or you hear an audio-muted dropout on DV).
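That error-accumulation chain can be sketched numerically. The following is a hedged illustration, not any specific DV protocol: a generic 2-FSK demodulator that correlates each bit period against the two expected tones, with the tone frequencies, bit length, noise levels, and the 'dump after 2 errors' threshold all being arbitrary assumptions of mine.

```python
import numpy as np

FS = 8000            # sample rate, Hz (illustrative)
F0, F1 = 1000, 2000  # mark/space tones, Hz (illustrative)
N = 80               # samples per bit (10 ms)

def fsk_modulate(bits):
    t = np.arange(N) / FS
    return np.concatenate(
        [np.sin(2 * np.pi * (F1 if b else F0) * t) for b in bits])

def fsk_demodulate(signal):
    # Correlate each bit-period chunk against both expected tones and
    # pick whichever discriminates more strongly.
    ref0 = np.exp(-2j * np.pi * F0 * np.arange(N) / FS)
    ref1 = np.exp(-2j * np.pi * F1 * np.arange(N) / FS)
    out = []
    for k in range(len(signal) // N):
        chunk = signal[k * N:(k + 1) * N]
        out.append(int(abs(chunk @ ref1) > abs(chunk @ ref0)))
    return out

rng = np.random.default_rng(42)
bits = list(rng.integers(0, 2, 64))
clean = fsk_modulate(bits)

results = {}
for sigma in (0.1, 5.0):   # quiet channel vs noise-swamped channel
    noisy = clean + rng.normal(0, sigma, clean.shape)
    errors = sum(b != r for b, r in zip(bits, fsk_demodulate(noisy)))
    results[sigma] = errors
    verdict = "packet dumped" if errors > 2 else "packet ok"
    print(f"noise sigma={sigma}: {errors} bit errors -> {verdict}")
```

At low noise the tone discrimination is unambiguous and the packet survives; once the noise magnitude rivals the correlation peaks, errors pile up in sequence and the integrity check throws the whole packet away, which is exactly the muted dropout heard on DV.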
So, under compromised carrier conditions, the residual 'shifted differences' are essentially outside the passband where the carrier cannot be determined, and so get lost too. That in itself creates perceived discrimination errors and integrity errors at the TGPY level.
So whether it's FSK over wire or wireless transmission, the signalling needs to be above a given threshold to be properly resolvable. At the other extreme, poor TX setups can produce over-modulated signals that sit too far above threshold (think of clipping in audio terms as a comparison), or, generally worse still in a wireless context, over-deviation if driven too hard.
So, just like with digital audio recording, there are upper and lower thresholds the modulation needs to exist within, as well as the carrier needing to average enough integrity to survive.
If you monitor frequencies and digital circuits, you'll find there are definite upper and lower degrees of modulation and deviation employed in functionally sound telegraphy, and when what's passed remains inside those constraints, you get maximum integrity, with low to nil errors by the time it's decoded.
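The upper-threshold side of that constraint can be shown with the audio-clipping comparison made earlier. A minimal sketch, with the tone frequency, drive levels, and rail limits all arbitrary illustration values: drive a tone past the rails and the flat-topped result grows odd harmonics it never had, so what leaves the transmitter is no longer what was meant to be sent.

```python
import numpy as np

FS = 8000
F = 50                     # fundamental, Hz (an exact FFT bin over 1 s)
t = np.arange(FS) / FS     # one second of samples

def third_harmonic_level(signal):
    # Normalised magnitude spectrum; read off the 3rd-harmonic bin (150 Hz).
    spectrum = np.abs(np.fft.rfft(signal)) / len(signal)
    return spectrum[3 * F]

# A tone kept inside the thresholds, and the same tone driven to 2x the rail.
within = 0.8 * np.sin(2 * np.pi * F * t)
driven = np.clip(2.0 * np.sin(2 * np.pi * F * t), -1.0, 1.0)

h3_clean = third_harmonic_level(within)
h3_clipped = third_harmonic_level(driven)
print(f"3rd harmonic, within limits: {h3_clean:.6f}")
print(f"3rd harmonic, over-driven:   {h3_clipped:.6f}")
```

The clean tone's third-harmonic bin sits at numerical noise; the clipped one carries substantial third-harmonic energy. In a wireless context the analogous failure is over-deviation, where the excess lands on adjacent spectrum instead of in harmonics.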
Why this happens, when things fall far out of threshold and signals become useless, is actually governed more by Ohm's Law physics than by radio or wired telegraphy principles.
Take a power source, cycle it on and off Morse-code style, and look at it on an oscilloscope: you'll see pulses that are neither pure square waves nor any other pure waveform. Looking at the magnitude, however, and measuring it, you'll discover both current and voltage are proportionally high and steady while the duration varies (a varying duty cycle), and the RMS level (a function of both the voltage and current levels) is less than the current and voltage peaks, but constant in proportion to them.
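That relationship between steady peaks, duty cycle, and RMS can be put in numbers. A small sketch with illustrative values (a 5 V keyed DC source): for an on/off keyed level, RMS works out to peak times the square root of the duty cycle, and it scales in direct proportion to the level itself.

```python
import numpy as np

V_PEAK = 5.0  # illustrative keyed source voltage

def keyed_rms(duty_cycle, peak=V_PEAK, samples=1000):
    # Build an on/off keyed waveform and measure its RMS directly.
    on = int(samples * duty_cycle)
    waveform = np.concatenate([np.full(on, peak), np.zeros(samples - on)])
    return float(np.sqrt(np.mean(waveform ** 2)))

for duty in (0.25, 0.5, 1.0):
    print(f"duty={duty:.2f}: peak={V_PEAK} V, rms={keyed_rms(duty):.3f} V")

# The proportionality that matters over distance: halve the level and the
# RMS halves with it, carrier and keyed content alike.
full = keyed_rms(0.5)
half = keyed_rms(0.5, peak=V_PEAK / 2)
print(f"halving the level: rms {full:.3f} V -> {half:.3f} V")
```

The peaks stay at 5 V whatever the keying; only the proportion of on-time sets the RMS. It's that fixed proportion that path loss then erodes.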
It's that consistency that deteriorates over the distance travelled, whether as EM over wires or as wireless transmission. Because the RMS reading is determined by magnitude and strength, when the signalling level drops, so does the effective RMS of the carrier, and by inherent proportion, so does the modulated content.
On a mode-by-mode basis, this has varied effects on the demodulated results, but the underlying physics is constant across all frequency-shifted modulated signalling, regardless of mode.
There is a telegraphy technical term, signalling current, and that is what you are mostly trying to keep consistent: as the measured current gets lost, the voltage reduces in proportion, and the signal level ultimately loses its integrity over distance. Now whilst current levels and perceived voltage are intertwined, when resolving loss over distance, it's actually the signalling current you are trying to improve/restore, since achieving that results in a more consistent RMS of the signalling.
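The Ohm's Law side of that is a plain series circuit. A minimal sketch, where the resistance-per-km and load values are illustrative assumptions of mine rather than figures for any particular line standard: the longer the line, the more series resistance, the less signalling current, and the voltage across the receiving instrument falls in proportion.

```python
V_SOURCE = 24.0   # battery/key voltage, volts (illustrative)
R_PER_KM = 10.0   # line loop resistance, ohms per km (illustrative)
R_LOAD = 600.0    # receiving instrument resistance, ohms (illustrative)

def signalling_current(distance_km):
    """Series circuit: I = V / (R_line + R_load)."""
    return V_SOURCE / (R_PER_KM * distance_km + R_LOAD)

for km in (10, 100):
    i = signalling_current(km)
    print(f"{km:3d} km: I = {i * 1000:.1f} mA, V at load = {i * R_LOAD:.1f} V")
```

The current and the load voltage fall together, which is why restoring signalling current restores the voltage, and with it the RMS consistency, in one move.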
That's essentially why, in wired telegraphy and telephony, staggered amplifiers and/or repeaters are used to compensate. The general-purpose role of these is to restore signal current, reduce the degree of loss, restore the overall signalling strength, and subsequently raise/restore the SNR.
Radio repeaters, and 'range extenders' (REs are usually simplex, versus the half-duplex or duplex operation found on repeaters), perform, in wireless terms, a similar role in essence: taking an input signal and relaying a higher-magnitude output for extended distribution.
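The distance arithmetic behind that relaying can be sketched with a bare link budget. This is a hedged simplification, free-space path loss only, with no terrain, fading, antenna gains, or receiver noise figure, and the 145 MHz frequency, 10 W power, and distances are arbitrary example values: it shows only why re-transmitting at full power from a midpoint delivers a stronger signal than one long direct hop.

```python
import math

def fspl_db(distance_km, freq_mhz):
    """Free-space path loss: 32.44 + 20*log10(d_km) + 20*log10(f_MHz)."""
    return 32.44 + 20 * math.log10(distance_km) + 20 * math.log10(freq_mhz)

TX_DBM = 40.0   # 10 W transmitter
FREQ = 145.0    # MHz

# One direct 40 km hop, versus the final 20 km hop from a midpoint
# repeater that re-transmits at the same 10 W.
direct = TX_DBM - fspl_db(40, FREQ)
via_repeater = TX_DBM - fspl_db(20, FREQ)
print(f"direct 40 km:      {direct:.1f} dBm")
print(f"via midpoint rptr: {via_repeater:.1f} dBm "
      f"({via_repeater - direct:.1f} dB better)")
```

Halving the final hop buys about 6 dB under this model. What the repeater does to the noise riding on its input is a separate matter, which is exactly the regenerative-vs-boost distinction below.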
There are quirks and inherent differences in the nature of both the amplification and the transfer of content between input and output in wired and wireless contexts, but you'd need to venture into regenerative amplification and regenerative translation to understand the difference from pure 'boost it' analogue-style amplification. And because this is already around three fathoms deeper than the average radio user ever goes, I'll save that aspect for a series of write-ups about galvanic vs thermal noise and how regenerative vs analogue application addresses the ingress and effects of thermal and galvanic noise, since they are more physics in concept than 'radio' in immediate terms.
If your school physics recollection (at the very least) isn't totally MIA over time, some of this will register at some level. Noise is nobody's friend in a signalling sense: truly disruptive noise is as hard to exploit consistently for malicious disruption as it is a major pain in general. Then again, the 'accidentally generated' noise from the harmonically intrusive 'Woodpecker' Duga over-the-horizon radar system does nicely demonstrate how enough ERP, and the right spread and magnitude of signalling shifts, can produce a manageable and consistently disruptive effect if you really try. I don't believe the design's side effects were accidental at all, and I've seen no verifiable evidence supporting the 'accident of design' unintended-collateral-effect claim that was given. But where such signalling genuinely is a case of literally unforeseen consequences, it's also a good demonstration of how easy it is to accidentally create massive RF issues with not a lot of effort, if you don't take precautions and assess comprehensively at the prototype stage. That's a whole separate topic, but Duga, being inherently telegraphic in form, is a good example of the horror stories you can find of the extreme impact of ill-conceived and poorly managed RF TGPY-type transmission.