# Talk Spotter

A voice-activated amateur radio spotting tool for Linux by Evan Boyar, NR8E. Talk Spotter listens to radio audio streams, transcribes speech on-device, and can post spots to the DX Cluster network and POTA via voice commands.
It is similar to but different from CW Skimmer by VE3NEA in several ways, and not just in that it's open source. Decoding the human voice takes far more signal processing than decoding CW, so Talk Spotter can only decode audio from a single frequency and mode at a time. As a result, you'll need to set the frequency and mode you'd like your instance to listen on ahead of time. A standard list of frequency/mode pairs for the US amateur bands is in the Band Plan section below.
I suggest you set it and forget it on a Raspberry Pi. I've tested it on a 3 B+.
## Features
- Multiple audio sources: RTL-SDR (local hardware), KiwiSDR (remote), or transceiver via sound card + Hamlib CAT control
- On-device transcription: Uses Vosk for $0 speech-to-text
- Voice command parsing: Say "talk spotter" followed by callsign and frequency to post a spot (see Usage below for the exact format)
- DX Cluster integration: Posts spots to the DX Cluster network
- POTA integration: Posts spots directly to Parks on the Air
- SOTA integration: Posts spots to Summits on the Air (OAuth authenticated), coming soon™ (ignore the SOTA instructions later in this README; it's mostly implemented but not working because the SOTA folks need to get back to me to grant me an API key)
- Designed for Raspberry Pi: Lightweight, minimal dependencies
## Band Plan
As Talk Spotter is intended to be used by more than just those who have set up listening nodes, let's try to stick to these frequencies and modes. Send me a PR if you've noticed an issue with one of these.
| Band | Frequency (kHz) | Mode |
|---|---|---|
| 40 m | 7278 | lsb |
| 20 m | 14278 | usb |
| 10 m | 28578 | usb |
| 2 m | 147578 | nbfm |
| 70 cm | 444578 | nbfm |
## Requirements
- Python 3.8+
- Linux
- For RTL-SDR: RTL-SDR dongle (e.g., RTL-SDR Blog V3)
- For KiwiSDR: Internet connection
- For Transceiver: Sound card interface (e.g., Digirig), and optionally Hamlib (libhamlib-utils) for CAT control
## Installation
Pre-built .deb packages for x86_64 and ARM64 are available on the Releases page. This is the easiest way to install on Debian-based systems (including Raspberry Pi OS):
```
sudo dpkg -i talk-spotter_*.deb
sudo apt-get install -f   # install any missing dependencies
```

The package installs a systemd service, downloads the Vosk model, and sets up a virtual environment automatically. After install:
```
sudo nano /etc/talk-spotter/config.yaml   # set your callsign and radio settings
sudo systemctl start talk-spotter
sudo systemctl enable talk-spotter        # start on boot
```

Or run manually: `talk-spotter --live`
Alternatively, install from source in one line:

```
sudo apt install -y libhamlib-utils && \
git clone https://github.com/EvanBoyar/talk-spotter.git && cd talk-spotter && \
python3 -m venv venv && venv/bin/pip install -r requirements.txt && venv/bin/pip install soundcard && \
wget -q https://alphacephei.com/vosk/models/vosk-model-small-en-us-0.15.zip && \
unzip -q vosk-model-small-en-us-0.15.zip && rm vosk-model-small-en-us-0.15.zip && \
git clone https://github.com/jks-prv/kiwiclient.git
```

Then edit config.yaml with your callsign and radio settings, and run:

```
source venv/bin/activate && python talk_spotter.py
```
Or, step by step:

1. Clone the repository

   ```
   git clone https://github.com/EvanBoyar/talk-spotter.git
   cd talk-spotter
   ```

2. Create and activate a virtual environment

   ```
   python3 -m venv venv
   source venv/bin/activate
   ```

3. Install Python dependencies

   ```
   pip install -r requirements.txt
   ```
4. Download the Vosk speech recognition model

   ```
   wget https://alphacephei.com/vosk/models/vosk-model-small-en-us-0.15.zip
   unzip vosk-model-small-en-us-0.15.zip
   ```

5. Install kiwiclient (for KiwiSDR support)

   ```
   git clone https://github.com/jks-prv/kiwiclient.git
   ```
6. (RTL-SDR only) Blacklist kernel modules

   This is kinda stupid, but your computer probably thinks it knows how to deal with RTL-SDR-like SDRs. It almost certainly doesn't, and you need to keep it from trying to use its built-in kernel modules. C'est la vie.

   Create /etc/modprobe.d/blacklist-rtlsdr.conf:

   ```
   blacklist dvb_usb_rtl28xxu
   blacklist rtl2832_sdr
   blacklist rtl2832
   ```

   Then reboot or unload the modules manually.
7. (Transceiver only) Install additional dependencies

   ```
   pip install soundcard              # for sound card audio input
   sudo apt install libhamlib-utils   # for rigctld CAT control (optional)
   ```
## Configuration
Edit config.yaml to configure your setup. It's designed to be pretty human-readable.
To keep personal settings out of git, create config.local.yaml with just the keys you want to override — it's gitignored and deep-merged on top of config.yaml at startup.
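To illustrate what "deep-merged" means here: nested keys in config.local.yaml replace only the matching keys in config.yaml, and everything else is kept. A rough sketch of the idea (illustrative only, not Talk Spotter's actual implementation):

```python
def deep_merge(base: dict, override: dict) -> dict:
    """Recursively overlay `override` onto `base`, keeping unrelated keys."""
    merged = dict(base)
    for key, value in override.items():
        if isinstance(value, dict) and isinstance(merged.get(key), dict):
            merged[key] = deep_merge(merged[key], value)  # descend into nested sections
        else:
            merged[key] = value  # scalars and new keys replace outright
    return merged

# Hypothetical config contents, for illustration:
base = {"callsign": "N0CALL", "transceiver": {"frequency": 14278, "mode": "usb"}}
local = {"callsign": "NR8E", "transceiver": {"frequency": 7278}}
print(deep_merge(base, local))
# {'callsign': 'NR8E', 'transceiver': {'frequency': 7278, 'mode': 'usb'}}
```

Note that `mode: "usb"` survives even though config.local.yaml overrides another key inside the same `transceiver` section.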
For HF SSB reception with the RTL-SDR, you'll need to set `direct_sampling: 2`, which means we're, uh, directly sampling the Q branch, which is what you should use for HF. Use `direct_sampling: 0` for UHF/VHF.
SOTA Setup: This is broken right now, so ignore it until the SOTA people finally get back to me. Anyway. SOTA requires one-time authentication. Run:

```
venv/bin/python talk_spotter.py --sota-login
```

and follow the instructions to log in via your browser. Tokens are stored locally and auto-refresh, so you only need to do this once.
## Usage
Note: All commands below assume you're in the talk-spotter directory. Use venv/bin/python to run with the virtual environment's Python (no need to activate the venv first).
Run with settings from config.yaml:
```
venv/bin/python talk_spotter.py
```

Use --radio to override the config file:

```
venv/bin/python talk_spotter.py --radio kiwisdr
venv/bin/python talk_spotter.py --radio rtl_sdr
venv/bin/python talk_spotter.py --radio transceiver
```

Connect a sound card interface (e.g., Digirig) between your rig and computer. Configure the transceiver section in config.yaml:
```yaml
radio: "transceiver"
transceiver:
  frequency: 14278              # tunes the rig on startup (0 = leave as-is)
  mode: "usb"                   # sets mode on startup (empty = leave as-is)
  rig_model: 1034               # Hamlib model (0 = audio only, no CAT control)
  microphone_substring: "USB Audio Device"   # match your sound card
```

Run --list-audio to see available audio input devices and find the right microphone_substring value:

```
venv/bin/python talk_spotter.py --list-audio
```

With rig_model: 0, it streams audio from the sound card without any CAT control, which is useful if your rig doesn't support CAT or you don't have a serial connection. With a rig model set, it starts rigctld automatically, tunes the rig, and polls for frequency/mode changes.
Find your Hamlib rig model number with `rigctld --list | grep -i "your rig"`, or just go look at Hamlib's documentation and Ctrl+F your rig.
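For the curious: rigctld exposes a plain-text TCP protocol (default port 4532), where the one-letter command `f` returns the current frequency in Hz. A minimal sketch of polling it, independent of Talk Spotter's own code:

```python
import socket

def parse_freq_reply(reply: str) -> int:
    """rigctld answers 'f' with the frequency in Hz on a single line."""
    return int(reply.strip())

def get_rig_freq_khz(host: str = "localhost", port: int = 4532) -> float:
    """Poll a running rigctld instance for the current frequency in kHz."""
    with socket.create_connection((host, port), timeout=5) as sock:
        sock.sendall(b"f\n")                      # 'f' = get_freq
        reply = sock.recv(64).decode("ascii")
    return parse_freq_reply(reply) / 1000         # Hz -> kHz
```

With a rig tuned to 14.278 MHz, `get_rig_freq_khz()` would return 14278.0.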
```
venv/bin/python talk_spotter.py --radio kiwisdr
```

Spot posting via voice commands is enabled by default. Voice command format:
- Say "talk spotter" (wake phrase)
- Say "call" (or "callsign" / "call sign") followed by the callsign in NATO phonetics (e.g., "whiskey one alpha whiskey")
- (Optional) Say "parks" for POTA or "summits" for SOTA, followed by the reference (e.g., "kilo dash one two three four" for K-1234, or "whiskey four charlie slash charlie mike dash zero zero one" for W4C/CM-001)
- Say "frequency" followed by the frequency (e.g., "one four point two one nine" for 14.219 MHz, "one four two one nine" for 14219 kHz, or "twenty eight decimal five" for 28.5 MHz)
- Say "end" or "complete" to post the spot
Speak slowly and clearly. I've found that pausing slightly after wake words helps. It also helps to say numbers more like "seven point two zero zero" than "sevenpointtwozerozero".
Fields can be spoken in any order. "frequency ... call ... end" works just as well as "call ... frequency ... end".
If you make a mistake, say "cancel" to discard the current command and return to idle without posting anything.
If you don't say "end", the command will auto-complete after ~10 seconds of silence if a valid callsign and frequency were heard. Saying "talk spotter" at any point restarts the command from scratch.
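The phonetics step boils down to a word-to-character lookup. A rough, hypothetical sketch (the tables and helper name here are illustrative, not the shipped parser, which also has to handle misrecognized words):

```python
# NATO phonetic alphabet and spoken digits -> callsign characters
NATO = {
    "alpha": "A", "bravo": "B", "charlie": "C", "delta": "D", "echo": "E",
    "foxtrot": "F", "golf": "G", "hotel": "H", "india": "I", "juliett": "J",
    "kilo": "K", "lima": "L", "mike": "M", "november": "N", "oscar": "O",
    "papa": "P", "quebec": "Q", "romeo": "R", "sierra": "S", "tango": "T",
    "uniform": "U", "victor": "V", "whiskey": "W", "xray": "X",
    "yankee": "Y", "zulu": "Z",
}
DIGITS = {
    "zero": "0", "one": "1", "two": "2", "three": "3", "four": "4",
    "five": "5", "six": "6", "seven": "7", "eight": "8", "nine": "9",
}

def words_to_callsign(words: str) -> str:
    """Map spoken phonetics, digits, and separators to callsign characters."""
    table = {**NATO, **DIGITS, "slash": "/", "dash": "-"}
    return "".join(table[w] for w in words.lower().split())

print(words_to_callsign("whiskey one alpha whiskey"))   # W1AW
```

The same lookup covers POTA and SOTA references, since "dash" and "slash" map to their literal characters.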
Examples:
Basic DX Cluster spot:
"talk spotter call whiskey one alpha whiskey frequency one four point two one nine end"
Same spot, frequency first:
"talk spotter frequency one four point two one nine call whiskey one alpha whiskey complete"
POTA spot (posts to both POTA and DX Cluster):
"talk spotter call sign whiskey one alpha whiskey parks kilo dash one two three four frequency one four point two one nine end"
SOTA spot (posts to both SOTA and DX Cluster):
"talk spotter call whiskey one alpha whiskey summits whiskey four charlie slash charlie mike dash zero zero one frequency one four point two one nine end"
Frequency formats: A decimal point is interpreted as MHz and converted to kHz internally. No decimal is interpreted as kHz directly. Compound number words like "twenty eight" and "fourteen" BEFORE THE DECIMAL are understood, so "twenty eight decimal five" gives 28.5 MHz = 28500 kHz.
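Those rules can be sketched as follows (hypothetical helper names; the real parser may differ in details like rejecting malformed input):

```python
DIGITS = {"zero": "0", "one": "1", "two": "2", "three": "3", "four": "4",
          "five": "5", "six": "6", "seven": "7", "eight": "8", "nine": "9"}
TEENS = {"ten": "10", "eleven": "11", "twelve": "12", "thirteen": "13",
         "fourteen": "14", "fifteen": "15", "sixteen": "16",
         "seventeen": "17", "eighteen": "18", "nineteen": "19"}
TENS = {"twenty": 20, "thirty": 30, "forty": 40, "fifty": 50,
        "sixty": 60, "seventy": 70, "eighty": 80, "ninety": 90}

def _whole_part(words):
    """Digits concatenate; compound words like 'twenty eight' become 28."""
    out, i = "", 0
    while i < len(words):
        w = words[i]
        if w in TENS:
            n = TENS[w]
            # absorb a following unit digit: "twenty eight" -> 28
            if i + 1 < len(words) and words[i + 1] in DIGITS and words[i + 1] != "zero":
                n += int(DIGITS[words[i + 1]])
                i += 1
            out += str(n)
        elif w in TEENS:
            out += TEENS[w]
        else:
            out += DIGITS[w]
        i += 1
    return out

def words_to_khz(spoken: str) -> int:
    words = spoken.lower().split()
    for sep in ("point", "decimal"):
        if sep in words:
            cut = words.index(sep)
            mhz = int(_whole_part(words[:cut]))
            frac = "".join(DIGITS[w] for w in words[cut + 1:])
            return mhz * 1000 + int(frac.ljust(3, "0")[:3])   # MHz -> kHz
    return int(_whole_part(words))                            # no decimal: already kHz

print(words_to_khz("one four point two one nine"))   # 14219
print(words_to_khz("twenty eight decimal five"))     # 28500
print(words_to_khz("one four two one nine"))         # 14219
```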
POTA spots require the park reference (e.g., K-1234). Speak it as "kilo dash one two three four" using NATO phonetics for letters and spoken numbers for digits.
Parse voice commands without actually posting:
```
venv/bin/python talk_spotter.py --no-post
```

For a clean, real-time view of what's being transcribed:

```
venv/bin/python talk_spotter.py --live
```

Text appears as it's recognized, updating in place until each phrase is finalized.
By default, Vosk is constrained to only the vocabulary used in Talk Spotter commands: NATO phonetics, spoken number words, and command keywords like "call", "frequency", "end", etc. This reduces false positives from background radio noise and improves accuracy on the words that matter.
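For reference, Vosk's KaldiRecognizer accepts an optional JSON list of allowed words as a third constructor argument. A hedged sketch of how such a grammar might be built; this word list is an abridged reconstruction for illustration, not the project's actual vocabulary:

```python
import json

# Hypothetical reconstruction of the constrained vocabulary.
PHONETICS = ["alpha", "bravo", "charlie", "delta", "echo", "foxtrot", "golf",
             "hotel", "india", "juliett", "kilo", "lima", "mike", "november",
             "oscar", "papa", "quebec", "romeo", "sierra", "tango", "uniform",
             "victor", "whiskey", "xray", "yankee", "zulu"]
NUMBERS = ["zero", "one", "two", "three", "four", "five", "six", "seven",
           "eight", "nine", "ten", "eleven", "twelve", "thirteen", "fourteen",
           "fifteen", "sixteen", "seventeen", "eighteen", "nineteen",
           "twenty", "thirty", "forty", "fifty", "sixty", "seventy",
           "eighty", "ninety"]
KEYWORDS = ["talk", "spotter", "call", "callsign", "sign", "frequency",
            "point", "decimal", "dash", "slash", "parks", "summits",
            "end", "complete", "cancel"]

# "[unk]" lets the recognizer emit a placeholder for out-of-vocabulary audio
# instead of forcing every sound onto a real word.
grammar = json.dumps(PHONETICS + NUMBERS + KEYWORDS + ["[unk]"])
# Passed at recognizer construction:
#   rec = KaldiRecognizer(model, 16000, grammar)
# instead of the unconstrained:
#   rec = KaldiRecognizer(model, 16000)
```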
Use --no-grammar to disable this and allow Vosk to output any English word, which is useful for comparing results:
```
venv/bin/python talk_spotter.py --no-grammar
venv/bin/python talk_spotter.py --live --no-grammar
venv/bin/python talk_spotter.py --test-file recording.wav --no-grammar
```

Save received audio to a WAV file:

```
venv/bin/python talk_spotter.py --save-wav debug.wav
```

Test transcription with a pre-recorded file:

```
venv/bin/python talk_spotter.py --test-file recording.wav
```

Full CLI reference:

```
usage: talk_spotter.py [-h] [--config CONFIG] [--radio {kiwisdr,rtl_sdr,transceiver}]
                       [--debug] [--save-wav FILE] [--test-file FILE]
                       [--no-post] [--live] [--no-grammar]
                       [--list-audio]
                       [--sota-login] [--sota-logout] [--sota-status]

options:
  -h, --help            show this help message and exit
  --config, -c CONFIG   Path to configuration file (default: config.yaml)
  --radio, -r {kiwisdr,rtl_sdr,transceiver}
                        Radio source (overrides config)
  --debug, -d           Enable debug logging
  --save-wav FILE       Save received audio to WAV file for debugging
  --test-file FILE      Test transcription with a WAV file (no radio needed)
  --no-post             Parse voice commands but don't actually post spots
  --live                Live transcription mode - clean real-time display
  --no-grammar          Disable grammar constraints (allow any English word)
  --list-audio          List available audio input devices and exit
  --sota-login          Login to SOTA (one-time setup for spot posting)
  --sota-logout         Logout from SOTA (clear stored tokens)
  --sota-status         Check SOTA authentication status
```
To run Talk Spotter automatically on boot (useful for a dedicated Pi), create a systemd service:
1. Create the service file

   ```
   sudo nano /etc/systemd/system/talkspotter.service
   ```
2. Paste this configuration (adjust paths and user as needed):

   ```ini
   [Unit]
   Description=Talk Spotter - Voice-activated radio spotting
   After=network.target

   [Service]
   Type=simple
   User=pi
   WorkingDirectory=/home/pi/talk-spotter
   ExecStart=/home/pi/talk-spotter/venv/bin/python talk_spotter.py
   Restart=on-failure
   RestartSec=10

   [Install]
   WantedBy=multi-user.target
   ```
3. Enable and start the service

   ```
   sudo systemctl daemon-reload
   sudo systemctl enable talkspotter
   sudo systemctl start talkspotter
   ```
4. Useful commands

   ```
   sudo systemctl status talkspotter     # Check status
   sudo journalctl -u talkspotter -f     # View live logs
   sudo systemctl restart talkspotter    # Restart after config changes
   ```
## Troubleshooting
Make sure the DVB-T kernel modules are blacklisted (see Installation step 6).
This is benign with pyrtlsdr. Audio still works correctly.
Enable hardware AGC (agc: true) and use direct sampling (direct_sampling: 2) for HF.
- Ensure audio is clear (check with --save-wav)
- Try a larger Vosk model for better accuracy
- Speak clearly and use standard phonetics for callsigns
- Grammar constraints are on by default; use --no-grammar to disable if needed
If you see "ModuleNotFoundError: No module named 'vosk'" (or similar), you're not using the virtual environment's Python. Make sure to run with venv/bin/python talk_spotter.py as shown in the Usage section above.
## License
MIT
Copyright 2026 Evan Boyar
Permission is hereby granted, free of charge, to any person obtaining a copy of this software and associated documentation files (the “Software”), to deal in the Software without restriction, including without limitation the rights to use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of the Software, and to permit persons to whom the Software is furnished to do so, subject to the following conditions:
The above copyright notice and this permission notice shall be included in all copies or substantial portions of the Software.
THE SOFTWARE IS PROVIDED “AS IS”, WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.
█████████████████████████████████
██ ▄▄▄▄▄ █ ▄▄ █▀ ██▀ █▀█ ▄▄▄▄▄ ██
██ █ █ ██▄█▀▀ ▀▄ ▄ ▄▄█ █ █ ██
██ █▄▄▄█ █ ▀▀▄ ▄▀█▀ ▄███ █▄▄▄█ ██
██▄▄▄▄▄▄▄█ ▀▄█▄█▄▀▄█▄▀ █▄▄▄▄▄▄▄██
██ ▀▀ ▄█ ▄█▄ █▀ ▀▀█ ▀ ▄██ ██
██ ▀ ▀█▄ ▄ ▀ ▄█▄▀█▀▄▀▀▄▄ ▄█▄██
████▄ ▄▄█ ████▀▄▀▀▄ █ ███▀ ██
██▄▀ ▀██▄▀▄█▀▄█▄▀ █▄ ▀▄ ▄███▄██
███ ███ ▄▄▀▄ ▄ █▀▀ ██▀▄▄ ▀██▀▀ ██
██▄▄ ▄▀ ▄▄ ▀█▀ █▄▄ ▄ ▄▀█▀█ ▄██▄██
██▄▄█▄▄█▄▄▀▄███ ▄▀ █▀█ ▄▄▄ ▀▄▀██
██ ▄▄▄▄▄ █▀▀█▄█▀█▄██▄▀ █▄█ ██▀▄██
██ █ █ ███▄▄ ▄█ ▀▄ ▄▄ ▀▄▀██
██ █▄▄▄█ █ ▀▄▀ ▄ ▄ █▀█▀▄█▀ ▀█▄▄██
██▄▄▄▄▄▄▄█▄█▄██▄▄▄████▄███▄▄██▄██
█████████████████████████████████
