If you missed my last installment, the summary is that I left for Hamvention with a partially working Whitebox Bravo.  The device turned on and all the chips responded to my SPI commands and other control signals, but my PLLs were not locking.

There is a very long story of things I tried along the way.  I’ll cover the high points.  At first, I thought perhaps the shorted clock net had b0rked the clock buffer for the PLL.  I tried booting up a fresh board, one that had never been powered on with the manufacturing defect in place.  No dice.

Luckily, the RF PLL, the ADF4351, has a way to look inside the Phase Frequency Detector (PFD) and see why it isn’t locking.  When I looked at the output for the VCO divider, things looked alright.  But the PFD output for the clock indicated the signal wasn’t making it into the chip.

This proved that the IC wasn’t totally fried, and now I knew it had to be something really silly.  I decided to follow my gut and change from a resistor (DC coupling) to a capacitor (AC coupling) on the net from the TCXO to the PLL.  And voilà.  As soon as I made this change, both PLLs worked perfectly (I know, another Technician class mistake!)  Here’s the output of the phase frequency detector:

So it wasn’t anything major, and sadly I still haven’t had my first real-world need to solve a Laplace transform.  But I learned a valuable lesson:

Don’t do little changes at the end, right before going to print.  Even if they’re little.

I added a bunch of passives around the TCXO to allow for the option of plugging in a 10 MHz reference clock via an SMA connector.  This is a popular ask and critical for radio astronomy as well as cell phone base stations.  I made this addition very late in the design cycle and didn’t receive anything more than a short “looks good to me” design review on it.

I would have saved a lot of headache for myself if I had stuck to the original set of features.  There can always be another revision for more new stuff.

First Transmit

So now, it seemed like the radio was operational.  I had two locking PLLs.  I had a transmit chain coming from the FPGA, and then being encoded by my DAC.  I set up the real test - hooking up the transmit SMA jack on the Whitebox to a receiver on my USRP.

Thanks to my extensive debugging of the project, I got a signal to come out first try at exactly where I expected it to be!

[image: spectrum of the first transmitted signal, as received by the USRP]

What you’re seeing here is:

  • A 1.2 kHz AM-modulated signal coming out of the DAC,
  • which was mixed with a carrier at 144.39 MHz,
  • then sent to the USRP over a coaxial cable,
  • then downmixed with a 144 MHz center frequency.

You can see both the CW tone as well as harmonics.  Remember, the CMX991 radio chip’s output is unbuffered and unfiltered, as I have yet to choose a particular application for this building block circuit.  I would like to start the discussion about what topologies and components should be at this stage in the circuit (please join the discussion below if you have any ideas on this!)

What’s Next

Now things are going to get fun.  The hardware is there and functioning as a radio.  The base drivers are there to control the hardware, turning on and off different parts for power savings.

It’s time to do some Digital Signal Processing.  This is the Black Magic I wanted to learn in the first place.

The starting point for trying out new ideas in this arena is to have a well-tested signal processing engine hooked into the hardware.  And there’s no better signal processing system for radio than GNURadio.

Step one is to get data from GNURadio to the Whitebox over Ethernet. My last change got this to work at a sustained 50,000 samples / second, or 50kS/s.  Each sample is two 16-bit 2’s complement numbers (say that 64 times fast!), representing the quadrature data.

[image: pylab plot of the I/Q samples streamed from GNURadio over UDP]

I dumped a snapshot of samples flowing from GNURadio, over UDP, to my user space program, and used pylab to generate this graph.  Thank you to the pylab developers for wrapping matplotlib up into a one-line import; with all the Python libraries (including SciPy) you end up with a feature set that approaches MATLAB.
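If you want to poke at the same stream yourself, here’s a minimal sketch of the receiving side.  The port number is made up, and I’m assuming the UDP sink is sending little-endian shorts, I then Q:

import socket
import struct

import pylab

UDP_PORT = 12345        # made-up port number; point the GNURadio UDP sink here
PAIRS_TO_GRAB = 2048    # how many I/Q pairs to collect before plotting

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.bind(('0.0.0.0', UDP_PORT))

i_samples, q_samples = [], []
while len(i_samples) < PAIRS_TO_GRAB:
    datagram, _ = sock.recvfrom(4096)
    # Each sample is two 16-bit 2's complement numbers, I then Q,
    # assumed little-endian here.
    values = struct.unpack('<%dh' % (len(datagram) // 2), datagram)
    i_samples.extend(values[0::2])
    q_samples.extend(values[1::2])

pylab.plot(i_samples)
pylab.plot(q_samples)
pylab.show()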

After the samples hit the user space program on the radio, I need to get them across the ARM and over to the FPGA for interpolation.  This happens over the on-board System-on-Chip bus; the bus I’m hooking into on the SmartFusion is called the APB3 bus.

Here’s a sneak preview of what I want to talk about next: designing, simulating, and synthesizing the digital exciter so that it works seamlessly from Linux to the antenna.

[image: simulation waveforms from the digital exciter test bench]

This simulation was done 100% in Python, and uses a Bus Functional Model to simulate the ARM processor communicating with the core I’m writing.  The purple sawtooth wave in the lower right demonstrates the exciter’s ability to pick up samples via the APB3 bus, then buffer them in a FIFO to cross into the DSP clock domain.  From there, the signal chain interpolates and interleaves the signal to be encoded by the quadrature DAC.
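If you’ve never seen a Bus Functional Model before, here’s a stripped down sketch of what one can look like in MyHDL.  The signal names and the register address are invented for illustration; this isn’t the actual core’s interface:

from myhdl import instance

def apb3_write_bfm(pclk, psel, penable, pwrite, paddr, pwdata, samples):
    """Drive one APB3-style write per sample, standing in for the ARM processor."""
    @instance
    def driver():
        for sample in samples:
            # Setup phase: select the core and present the address and data
            yield pclk.posedge
            psel.next = 1
            pwrite.next = 1
            paddr.next = 0x40050000   # made-up sample FIFO register address
            pwdata.next = sample
            penable.next = 0
            # Access phase: assert PENABLE for one clock to complete the transfer
            yield pclk.posedge
            penable.next = 1
            # Back to idle before the next transfer
            yield pclk.posedge
            psel.next = 0
            penable.next = 0
    return driver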

9:06 am  •  25 June 2013

This tale starts with me wanting to make 10 radios for Hamvention.  I had done an Alpha board for fun, to see if I could do it.  And some things worked, but others did not.

In order to make 10, I had to improve on my hot plate manufacturing technique.  I wanted to try using the surface mount assembly lines, with all their automation.  And boy did I learn a lot during that process.

I fixed up my bill of materials.  I learned how to order in quantity for an assembly job.  I actually worked with manufacturers to make this thing a reality.  It was a ton of work, and I scrutinized every aspect of the Alpha design.  But how much fun it was to see the robots assemble the units!

The PCBs were manufactured via PalPilot, who proxies the work to Asia.  The assembly facility, however, Outsource Manufacturing Inc., is a short drive from my home in LA; it’s located in Carlsbad, a great city with some solid surf.

But today I wasn’t surfing; I was holding the first finished Whitebox Bravo, fresh from its quality assurance step.  I held it in my hand and looked at every component perfectly placed on the PCB.  I did what anyone who had just made a device with 200 components, 300 holes, and 4000 wires would do… I plugged it right into my computer to see if it ‘just worked’.

[image: the freshly assembled Whitebox Bravo board]

And the kernel did come up, but there was a problem.  The board was HOT to the touch, near the synthesizer subsystem…

At first, I thought I must have messed up my schematic, or layout, or something.  I drove home from the assembly facility wondering if I had just built a paperweight.  Comparing schematics from old to new, everything looked good.

Luckily, I had prepared for shorted power nets.  There was an 0805 jumper resistor on the synthesizer voltage rail, which I removed.  This had the effect of turning off the entire subcircuit.

I powered the device up, and current consumption was normal.  So, I had isolated where the error was, somewhere in the synthesizer.

It took me a bit to finally see the issue.  I had pooched the direction of the designator for the TCXO (Temperature Compensated Crystal Oscillator), and it was on backwards, shorting power to ground and ground to power.

I went for it.  With the help of my Aoyue 968A rework station, I pulled the crystal off with a heat gun; then re-soldered it on with the right orientation. I re-applied the jumper.

Power it up… and this time the PLL IC (ADF4351) responds to SPI commands.  Oh yes, sweet victory!

With the board now electrically sound, I started to run it through some paces.

  • I brought up a custom kernel image via TFTP that configures an NFS mount.  Everything came up correctly.  Check.
  • I flashed the FPGA firmware with a new version.  I enabled the DAC and a firmware based signal generator using a look-up table (LUT).  My clean sine wave still looked clean.  Check.
  • The ADC changed its current consumption as I toggled its power-down bits.  Check.
  • The quadrature transceiver, the CMX991, would latch registers.  I could read the values back.  The built-in PLL would not lock.  Meh.
  • The external PLL, the ADF4351, would latch registers.  I could see its current consumption go up as I turned parts of the chip on and off.  But it too would not lock.  Meh.

So, everything was working except for the synthesizer… and you can’t make a superhet radio if you don’t have any synthesizers.

Specifically the PLLs would not lock.  And getting them to lock meant I needed to actually understand every aspect of their function, which I had not tackled yet.

So I had to go to Hamvention with a board that turned on, but that did not have locking PLLs.  But that’s okay.  As it turns out, the board spent most of its time in people’s hands.  It’s cool to hold a whole SDR in the palm of your hand, even if it’s not turned on yet.

I would have to wait until I got back in front of my bench to investigate why those PLLs wouldn’t lock.

But I’ll save that adventure for Part 2, including a new changelist that results in a fully operational Whitebox Bravo.

9:47 am  •  11 June 2013

My dream is to contribute to freeing the Internet from its bondage to wires.  This is something we’ll have to do if we ever hope to really battle Internet censorship bills like PIPA, SOPA, and now ACTA; or to prevent the government from looking into your proverbial home-on-your-smartphone without a warrant.

That’s why on December 18, 2011 (one year ago this week), I decided to build something called a software defined radio.  For those who don’t know what a software defined radio is, Eric Blossom, an early innovator in the space, wrote this:

Software radio is a revolution in radio design due to its ability to create radios that change on the fly, creating new choices for users…  Perhaps most exciting of all is the potential to build decentralized communication systems…. A centralized system limits the rate of innovation. We could take some lessons from the Internet and push the smarts out to the edges…. These user-owned devices would generate the network. They’d create a mesh among themselves, negotiate for backhaul and be free to evolve new solutions, features and applications.

After reading that, I started in earnest.  Having a background in Computer Engineering and focusing on something called FPGA hardware design, I knew that I could build a device even though it was to be my first radio design.  And a few weeks ago, I got my first transmission to come out of the antenna!

Let me take you through a tour of the board, so that you can get a feel for where the project is at, where my issues lie, and what the next steps are.

Overview

The core of the system is the Microsemi SmartFusion series customizable System on a Chip (from now on called a SoC).  A lot of big manufacturers, including Xilinx, Altera, Cypress, and Microsemi, have SoC devices that contain at least one ARM processor as well as programmable logic.  In my case, there’s a flash-based FPGA fabric hooked up to an ARM Cortex-M3, which I’ve used to do all (yes, all!) of the signal processing.

That’s right, since the whole system sits on a single chip, a high speed interconnect over Ethernet or USB to a host computer isn’t necessary.  Everything happens on the board, with a theoretical maximum of 16 Gb/s interconnect between processor and FPGA.

I’ve realized a portable device, which I’m holding here, running off of 4 AA batteries:

[image: the Whitebox running off of 4 AA batteries]

The SoC talks to a highly integrated radio frontend chip, the CMX991 quadrature transceiver from CML Microsystems.  This IC transmits and receives from 100MHz to 1GHz.  It’s designed to be the core of a P25 radio, which is the standard used in police radios, but since it’s quadrature and can mimic any mode, it can do so much more once hooked up to the SoC.  I chose this chip because it covers the 2m and 70cm amateur radio bands, which are the two most popular VHF/UHF bands.

Development Environment

I really enjoy my development environment, thanks to the countless hours others have put into Open Source hacking, and especially the work done by the engineers over at Emcraft Systems.  I have uClinux 2.6 running, and have access to a BusyBox shell via UART over USB, as well as the full networking stack connected to a 10/100 Mb/s Ethernet jack on the board.  This allows me to load custom built kernels over TFTP, and to mount my development files over NFS.  C files are cross compiled with GCC for the ARM Cortex-M3 target.

The programmable logic has been written in Python, and then transpiled to Verilog by a tool called MyHDL.  This has allowed me to develop using high level constructs like unittests and object inspection.  No small feat, if you’re familiar with coding Verilog.  I’ve simulated the entire radio on my development system, including having it talk to a GNURadio receiver, all virtually.  This means I can iterate on ideas quickly and verify their correctness before I go through the lengthy process of synthesis and programming onto my board.

I’ve written a simple C program that runs on the device, mainly to talk to the FPGA.  I issued the following commands to get the radio to transmit:

$ radio power_down

Put the radio into a sleep state; in this state it consumes 120mA @ 3.3V, or about 400mW.  There’s an even lower power mode with only the real time clock turned on, but I haven’t gotten around to enabling that mode yet.

$ radio power_up

Power up the DAC, ADC, Radio frontend, and warm up the VCOs.  This state consumes 220mA.

$ radio mode tx_test

Put the radio into the tx_test mode, setting up the radio frontend registers.  This also sets up the FPGA to flow a CW 1200Hz audio tone through an AM voice modulator.

$ radio dial 144.39e6

This dials the VFO to ~99.2 MHz (doh! check out below for my discussion on this issue), and bumps up the radio to consuming 330mA.

$ radio tx

This outputs a 1200Hz amplitude modulated tone for 10 seconds.  I know this is bad in the real world, but the transmit power is at -36dB at the moment so it wouldn’t radiate out of my room.  You can see my goal though - to build an APRS transmitter & receiver.  During transmit, the device consumes 360mA, at 3.3V.  This is ~1.2 Watts of power, for the entire software radio transmitting.  Plenty of budget for a power amplifier!
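For the curious, here’s the back-of-the-envelope power budget from the numbers above, as a quick Python sketch:

SUPPLY_VOLTAGE = 3.3  # volts

# Measured current draw in each state, from the walkthrough above (amps)
states = [('power_down', 0.120),
          ('power_up',   0.220),
          ('dial',       0.330),
          ('tx',         0.360)]

for name, amps in states:
    print('%-10s %.0f mW' % (name, amps * SUPPLY_VOLTAGE * 1000))
# tx: 0.360 A * 3.3 V is about 1.2 W for the whole radio while transmitting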

Analog Signals

Let’s follow the trail of a transmit using my oscilloscope to see what’s happening.  First, when I issue the tx command, the 1200Hz wave comes out of the DAC thanks to the Direct Digital Synthesizer I wrote inside of the FPGA.

I would like to increase the dynamic range coming out of this circuit, as the radio frontend can take a full Volt peak-to-peak as input.  I also need to add a low pass filter.  I know, a rookie mistake!

The analog baseband signal coming out of the DAC is then up converted by two analog mixers in the radio frontend.  First, the IF VCO:

[image: oscilloscope capture of the IF VCO output]

I’ve got three issues of note here:

  • This should be a hard 180MHz signal, but for some reason it seems to vary anywhere from 170MHz to 190MHz depending on the moons and the tides (in other words, I’m at a bit of a loss as to the stability issues here.)  Also, I need a spectrum analyzer to really get a good sense of what frequency is coming out, since the oscilloscope shows two different frequencies on the same display.  Slightly maddening.
  • I need to make the traces wider for this section of the board; as you can see, the 3.68mVp-p measured with a 50 ohm impedance oscilloscope probe is just squeaking by as a signal.  Yeah, I know, I haven’t impedance matched the RF traces yet.
  • I missed a resistor in the negative resistance amplifier filter, so I’ve had to cut the board with an X-Acto knife, and now have a rogue through-hole resistor flying off into the third dimension.  This really limits the portability, as I seem to snap it off even when I just move the board across my desk!

[image]

The IF VCO is divided by four, and then mixed with the incoming signal from the DAC. It is then mixed again from this intermediate frequency with the RF VCO, this time divided by two.  Here’s a snap of the RF VCO:

[image: oscilloscope capture of the RF VCO output]

I also have a frequency problem with this circuit.  When I tune it to its lowest frequency, 35MHz, it runs like a champ.  But when I go higher, things start to drift away from ideal.  In this case, I had dialed in 198.78MHz, which is what I need in order to hit the APRS channel at 144.39MHz.  But alas, it is off by a wide margin.
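For reference, here’s the arithmetic behind that 198.78MHz target, assuming the sum product of the second mixer is the one that gets used (a quick sketch, not code that runs on the radio):

IF_VCO = 180.00e6   # nominal IF VCO frequency, in Hz
RF_VCO = 198.78e6   # the RF VCO frequency I need to dial in, in Hz

intermediate = IF_VCO / 4               # 45.00 MHz after the first mixer
carrier = RF_VCO / 2 + intermediate     # 99.39 MHz + 45.00 MHz

print('carrier = %.2f MHz' % (carrier / 1e6))   # 144.39 MHz, the APRS channel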

So, finally, here’s the fully mixed RF signal:

[image: oscilloscope capture of the fully mixed RF signal]

That’s the 1200Hz tone I explained in previous posts, but now it’s been upconverted and AM modulated onto a ~99MHz RF carrier.  There’s definitely some noise in there, and this is where I think the “rf black art” is starting to come into play.

Next Steps

I’ve turned on the receiver part of the circuit, but haven’t gotten too far into it since I can’t lock onto a real transmission with my VFO.  That means my most immediate concern is figuring out how to get the VFO to hit the mark.

I was concerned that I would have signal integrity issues, and I’m sure that if I cranked the dial up to the theoretical 1GHz maximum I would be in trouble.  I will go back and impedance match on a good circuit board material for the next revision.

But for all of those issues, I have to say that I’m quite pleased with what I’ve accomplished in one year.  I built this board even though I knew there’d be problems, but I wanted to prove two things:

  1. That I could assemble all the parts and get them to work together.
  2. That the device would run off of batteries.

I’ve accomplished both of those tasks.  Now it’s a matter of going through the motions to fix the known issues.  I bet that with the help of the right RF engineer and a design review or two of my circuit board, I could produce a second revision of the board that works quite well.

It’s still a “test device” since it’s not a fully packaged radio, but the novelty of having the whole software radio fit in your palm is exciting.  I think this is an RF hacker’s dream.  So if anyone wants to help me debug these issues (please, please PLEASE!), I would be happy to hook you up with a second rendition of the board in exchange for your help.  Any takers, please email me at testac (at) <google’s public email domain>.com

I’ve also thought extensively about what software I want to run on the ARM and FPGA.  I’ve even sketched out how to fit the whole AM/FM/SSB/CW/APRS stack & more inside of the chip.  I still have lots of testing to do to find the best mix, but expect some posts on this stuff soon.

Thanks

I’d like to send a shout out to a bunch of people who’ve helped me get to this point:

Julia Cameron, author of The Artist’s Way book series, which got me to follow my boyhood dream of designing circuits.

Aaron Schulman @ University of Maryland, Ye-Sheng Kuo @ University of Michigan, Thomas Schmid @ University of Utah, and Prabal Dutta @ University of Michigan.  Without their help, Open Source mindset, and access to university equipment I wouldn’t have made it this far.

The TAPR and Amateur Radio organizations for their interest and for helping me along the way, especially Steven Bible and John Ackerman.  I also need to thank Bruce Perens, whose inspiring talks have helped guide me over the years.

Important figures in software radio including Eric Blossom, Matt Ettus, and Tom Rondeau, whose achievements make anything seem possible.

To Google and YouTube for teaching me the ways of really engineering a solution to a problem.  To the Adly team, especially Derek Rey who’s been an incredibly supportive friend and colleague.  And to my parents for being there, always there, no excuses.

Thank you all!

1:47 pm  •  21 December 2012

I’ve spent the past 12 months building some hardware that I could use to test out my ideas.  And this morning, I took a big step in realizing those ideas.  I wrote some code in Python to generate a sine wave, and used nice features of scipy and matplotlib to do analysis on those sine waves.  Nothing out of the ordinary there.  But then, I used MyHDL to convert my Python into Verilog code.  That Verilog code was synthesized and loaded onto an FPGA, and it worked.

The sine wave generator is valuable for what I’m building, something called a software defined radio.  I’ll get into what a software radio is in another post, but for right now, what you need to know is that radio modems are just like signal generators.  For example, a binary Frequency Shift Keying (2FSK) mode will use one tone to represent a zero, and another to represent a one.  For AFSK Bell 202, which is the particular mode used in the Automatic Packet Reporting System (APRS), the two tones are 1200Hz for a logic one, and 2200Hz for a logic zero.

Here’s the architecture of what we’re building today:

Let’s look at the yellow outputs, all the way on the right.  As you can see, the DAC is what’s known as quadrature.  It outputs both the in-phase (I) and quadrature-phase (Q) components of the signal, which is what the radio chip needs.  Using quadrature signals means that I can express any modulation technique from software.

The DAC takes as inputs: one 8-bit data line, and a clock line.  On the rising edge of the clock, whatever digital value is on the data input line is converted to an analog voltage on the in phase (I) output.  On the falling edge of the clock, whatever digital value is on the data input line is converted to an analog voltage on the quadrature phase output (Q).

Generating Samples in Python

So, how do you make a digital sine wave in an FPGA?  This is the job of the sine generator, the grey block above.  I’ve used an old standby for computer engineering - the Look Up Table (LUT) - to get this job done.  Here’s some plain old Python code to generate the look up table and make a nice graph of it.

from math import cos, ceil, pi
import matplotlib.pyplot as plt

# How many samples are in the look up table.  I arbitrarily chose this number.
NUM_SAMPLES = 100

# The frange function is a quickie: a range() that can step by floats.
def frange(start, stop, step):
    while start < stop:
        yield start
        start += step

# The look up table.  Note that the output DAC expects numbers from 0 to 255,
# so the center of the waveform should be at 127.  The clamp keeps rounding
# from pushing a sample outside the DAC's range.
SAMPLES = tuple([min(255, max(0, int(ceil(cos(i)*128)) + 127))
                 for i in frange(0, 2*pi, step=(2*pi)/NUM_SAMPLES)])

# generate a plot of the look up table
fig = plt.figure()
ax = fig.add_subplot(111)
ax.plot(SAMPLES)
fig.show()

So there you have it.  We’ve used Python to create an in-phase (cosine) look up table, going from 255, to 0, then back to 255 over one hundred samples.

Clocks, Frequencies, Periods… Cycles!

So, how do you take 100 samples of a sine wave and play them back at any frequency?  Well, the system clock is running at 10MHz, or a 100ns period.  We want 1.2kHz, which is a tiny fraction of the full system clock, with a much much longer period of 833.3kns.  That’s k as in kilo-nanoseconds!  So, all we have to do is hold each sample value for a number of system cycles, and the output will be a sine wave of the right frequency.

# 10MHz system clock
SYSTEM_CLOCK_FREQ = 10e6
# Period in ns of the system clock
SYSTEM_CLOCK_PERIOD_IN_NS = int(1.0 / SYSTEM_CLOCK_FREQ * 1e9)
# Signal output frequency, 1200Hz is a mark, logic 1 in Bell 202 AFSK
SIGNAL_FREQ = 1200
# Period in ns of the signal
SIGNAL_PERIOD_IN_NS = int(1.0 / SIGNAL_FREQ * 1e9)

# How many system clock cycles to output each sample of the signal
SYSTEM_CLOCK_CYCLES_PER_SAMPLE = int((SIGNAL_PERIOD_IN_NS / SYSTEM_CLOCK_PERIOD_IN_NS) / NUM_SAMPLES)
# Shorter alias, used by the signal generator below
CYCLES_PER_SAMPLE = SYSTEM_CLOCK_CYCLES_PER_SAMPLE

For a 1.2kHz signal on a 10MHz clock, this works out to 83 cycles per sample of the 100 sample look up table.

Defining the Signal Generator in Python with MyHDL

Now that we have the look up table, and know how many cycles to hold each sample, let’s turn to the signal generator module’s state machine:

def iq_sin_generator(resetn, system_clock, transmit_enable, select, dac_data, dac_clock):
    i_index = Signal(intbv(0, 0, NUM_SAMPLES))
    q_index = Signal(intbv(0, 0, NUM_SAMPLES))
    in_phase = Signal(bool(0))
    cycles_this_sample = Signal(intbv(0, 0, CYCLES_PER_SAMPLE))

We’ll track where we are in the look up table for the in phase and quadrature phase outputs, as well as which phase we’re currently outputting to the DAC.  The cycles_this_sample is a counter of how many system clock ticks have occurred on the current sample.

The look up table is implemented as combinational logic, whose outputs are always being driven.  MyHDL will later roll the look up table out into a big Verilog case statement.  Your FPGA tools will ideally load this LUT into special purpose ROM storage on the device.

@always_comb
def rom():
    dac_data.next = SAMPLES[i_index if in_phase else q_index] 

For this particular instance, every time i_index, q_index, or in_phase changes, the dac_data line will change its output by using the look up table.

Those three dependent variables are changed by the state machine.  The state machine is the object that changes over time.  In this case, time is referenced to the system clock running at 10MHz.

The base case of the state machine is the reset logic, when resetn is low.  The enabled case is activated by asserting the transmit_enable and select lines.  If not enabled, the machine does nothing.

@always(system_clock.posedge, resetn.negedge)
def state_machine():
    if not resetn:
        i_index.next = 0
        q_index.next = NUM_SAMPLES / 4 # Quadrature means 90 degrees off, or 1/4 wavelength
        in_phase.next = 0
        cycles_this_sample.next = 0
    elif transmit_enable and select:
        i_index.next = i_index
        q_index.next = q_index
        in_phase.next = in_phase
        
        if cycles_this_sample == CYCLES_PER_SAMPLE - 1:
            i_index.next = (i_index + 1) % NUM_SAMPLES if not in_phase else i_index
            q_index.next = (q_index + 1) % NUM_SAMPLES if in_phase else q_index
            in_phase.next = not in_phase
            dac_clock.next = not dac_clock
            cycles_this_sample.next = 0
        else:
            cycles_this_sample.next = cycles_this_sample + 1

The enabled state is the meat.  It holds the sample constant for a count of CYCLES_PER_SAMPLE; then it cycles between the in phase and quadrature phase readouts, toggling the dac_clock line as it goes.  As the state machine updates the index and in_phase registers, the look up table is accessed and routed out to the dac_data lines.

Mocking the DAC in Python

Well, it sounds nice so far.  But we have to see it to believe it.  This is where Python and the MyHDL library really start to shine.  The next step is to run a test. Even better, we’re going to see the output on the DAC by writing an incredibly simple mock of the system.

def dac_mock(i_samples, q_samples, dac_data, dac_clock):
    @always(dac_clock.posedge)
    def snoop_i_dac_output():
        i_samples.append(int(dac_data))
    
    @always(dac_clock.negedge)
    def snoop_q_dac_output():
        q_samples.append(int(dac_data))

We can instantiate this mock in a simulation, and it will record a list of I and Q samples for further analysis.  First let’s create all the signals of the test setup:

from myhdl import Signal, intbv
resetn = Signal(bool(1))
system_clock = Signal(bool(0))
transmit_enable = Signal(bool(1))
select = Signal(bool(1))
dac_data = Signal(intbv(0, 0, 256))
dac_clock = Signal(bool(0))

Then we can instantiate the logic blocks and run a simulation:

from myhdl import Simulation
i_samples, q_samples = [], []
driver = clock_driver(system_clock)  # Omitted for brevity. Creates a 10MHz clock
machine = iq_sin_generator(resetn, system_clock, transmit_enable, select, dac_data, dac_clock)
mock = dac_mock(i_samples, q_samples, dac_data, dac_clock)

sim = Simulation(driver, machine, mock)
sim.run(0.010 * 1e9)  # run for 10ms

I’m going to generate a graph of the two phases:

fig = plt.figure()
ax = fig.add_subplot(111)
ax.plot(i_samples)
ax.plot(q_samples)
fig.show()

There you go, the DAC should be outputting correctly.  We could do Fourier Analysis on this and make sure the frequency is correct… an exercise left to the reader!
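If you want to try that exercise, a quick sketch with NumPy might look like this.  It reuses i_samples and the constants from above, and derives the effective I sample rate from how the state machine toggles dac_clock:

import numpy as np

# i_samples are appended on dac_clock rising edges; the state machine toggles
# dac_clock every CYCLES_PER_SAMPLE system clocks, so one full dac_clock period
# is 2 * CYCLES_PER_SAMPLE system clocks.
i_sample_rate = SYSTEM_CLOCK_FREQ / (2.0 * CYCLES_PER_SAMPLE)

x = np.array(i_samples, dtype=float)
x -= x.mean()                           # remove the DC offset before the FFT
spectrum = np.abs(np.fft.fft(x))
freqs = np.fft.fftfreq(len(x), d=1.0 / i_sample_rate)
positive = freqs > 0
peak = freqs[positive][np.argmax(spectrum[positive])]
print('peak at about %.0f Hz' % peak)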

Generating Verilog with MyHDL

Now that we’re pretty sure the functional description is correct, how do you get it onto the FPGA?  Well, first you can use MyHDL to generate the Verilog:

from myhdl import toVerilog
toVerilog(iq_sin_generator, resetn, system_clock, transmit_enable, select, dac_data, dac_clock)

Yes it was that easy. I took the output Verilog file, dropped it in my FPGA design, and captured the following image on the I line:

Notice the 1.2kHz sine wave.

In summary, we used Python to record some math into a look up table & visualize it, then coded a state machine to funnel the look up table into a DAC at a specified frequency.  The state machine was tested in Python with a mock DAC, and we visualized how the DAC will operate.  We then synthesized this state machine into an FPGA design and observed the output on an oscilloscope.

From Python to bare silicon, now that’s powerful stuff!

4:18 pm  •  17 November 2012

Today I want to talk about a sketch I’ve done of an AFSK modulator in an FPGA, which is the radio modulation used by the Automatic Packet Reporting System (APRS).  APRS is a really useful digital mode to bring up first in my radio, because so much data is shared on it.  If you want to see the data I’m talking about, head over to my friend’s site aprs.fi, where he collects and shows all the APRS data from all over the world on a Google Map.  APRS kinda reminds me of Twitter in a bunch of ways…

Anyways, let’s check out the sketch:

The sketch is split into two parts: globals and the modulator.  The globals are available to every FPGA logic block, colored in grey.  reset_n is an active-low reset signal.  The transmit_en signal goes high to turn on all of the modulator’s circuitry; this conserves power when the blocks are not in use.  Finally, the FPGA fabric is running on a system-wide 10MHz clock.

So, how does the AFSK modulator work?  The first logic block is a symbol streamer.   Imagine that this streamer is filled with a constant string, “hello, world!” encoded in ASCII with MSB first.  At the frequency of the baud clock, or 1200Hz, the streamer will step to the next symbol in the data sequence.

Each time a new symbol is output from the streamer block, one of the two sine signal generators is chosen.  For AFSK Bell 202, a 1200Hz tone represents a logic 1, while a 2200Hz tone represents a logic 0.  The demultiplexer takes the output data and sends it out to the DAC for encoding.
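To make that concrete, here’s a minimal MyHDL-style sketch of just the tone-select step.  The signal names are invented here; this isn’t the final modulator code:

from myhdl import always_comb

def afsk_tone_select(symbol, mark_sample, space_sample, dac_data):
    """Route one of the two tone generators to the DAC based on the current symbol."""
    @always_comb
    def mux():
        # Bell 202: a logic 1 (mark) selects the 1200Hz tone,
        # a logic 0 (space) selects the 2200Hz tone.
        if symbol:
            dac_data.next = mark_sample
        else:
            dac_data.next = space_sample
    return mux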

Everyone seems to freak out about doing FPGA design work, but the modulator’s architecture is actually quite simple.  The hardest part is the sine signal generators, and making sure that all of the clocks line up so that you get the right bitrate coming out.  Both require just some understanding of simple mathematics, trigonometry being the main one.  The more I look at doing modulation and demodulation completely in the FPGA, the more I feel like it’s completely possible.

5:30 pm  •  16 November 2012

I have a special update for you all today.  We’re going to take the Siglent SDS1204CFL oscilloscope for a spin on my radio.  A special thanks goes out to Kendall Clark from Clark & Parsia for buying and lending this oscilloscope to me as I get closer to a fully working prototype of the radio.  If you have any interest in Semantic Web technologies, what I like to consider the grand-Papa of Big Data, you should definitely consult with these experts.  They can help you categorize, search, and learn from vast and diverse data sets.  Anyways, onto the scope porn!

It’s a 200MHz, 2GSa/s, 4-channel scope.  Pictured above is the scope auto-tuned on the reference 10MHz square wave.  Siglent is the largest OEM manufacturer of oscilloscopes in the world, so I hope they don’t mind that I re-branded it for my sponsor :)

 

Here’s what it looked like when I first got it.  It comes with four probes, and a USB cable to connect to your computer.  The USB connection has been really handy, because I can get complex test setups of the radio going, and then take snapshots to remember what exactly happened in each instance.  So many numbers to remember!

Let’s take some time and go through one sub-circuit of the radio, the DAC.  This circuit is the gateway from the digital world of the ARM and FPGA into the analog world of radio frequencies.

First, let’s prove that the FPGA is under my control.  What’s shown in the following figure is the output of the DAC clock.  Since it’s a digital signal, I used DC coupling with 1MOhm impedance on the oscilloscope channel.  This signal comes from the FPGA fabric, whose clock is running at 10MHz.  The display has a nice set of measurements.  One thing to note: my zero is floating a bit above ground… I’ll have to write another article about grounding in mixed signal systems, as there’s so much there to learn!

Okay so the DAC clock seems to be humming along.  This means we can take a look at the output of the DAC and hopefully see a sawtooth wave, since that’s what I have the FPGA outputting right now.  The DAC that I chose has differential outputs, so I used AC coupling at 1MOhm to see what was happening here:

Well, my signal is there, but there is some noise in it, as you can see.  Again, mixed signal grounding may be the cause of this problem, or I may just need more filtering.  The scope has a built-in filter, so I tried adding in a 10MHz low-pass filter and things cleaned up a lot:

Nice.  So, I can try a number of tricks up my sleeve to clean out potential noise in the power, but I can always add a filter in the next rev of the circuit!

So, I hope you can now see why the oscilloscope is so useful while you’re working on electronics.  Without it, I’d have no idea whether or not anything worked.  One little step, day by day, and the radio is coming along.  Slow and steady now!

12:49 pm  •  14 November 2012

I keep asking myself, “why am I here???”  I have no perfect answer.  No provable dogma that will make my life happy and complete.  Chris Testa, the ego has so many desires.  Wishes, dreams.  They all seem to be nothing but whispers in the wind, yet they mean so much.  I look out upon thousands of trees, Joshua Trees, each their own little ecosystem.  Every meal worm dreams to live inside of a yucca’s fruit.  There it is free to grow and enjoy life’s bounty.

God chooses whether the meal worm gets its dream.  All it can do is try.  Give it its best.  You must BELIEVE in a higher purpose, in a divine will, even if it is contained within yourself.  Especially if it is contained within yourself.  God is everywhere, in everything.  When I think otherwise, life seems meaningless.  Time slips without purpose.

To hear god’s word you must only just listen.  And see, and accept.  And through me God will do great things.  I will see and feel and touch and taste.  It will be magnificent.

Sometimes I will forget to listen, but I should not get angry at myself at those times.  Forgiveness is God’s favorite pastime.

I dream to see the stars tonight.  It is windy; the sun will begin to set, and the desert will awaken.

11:44 am  •  28 June 2012

As promised, here’s a tutorial on how my new magik website works.  I wanted to play around with the social APIs in a new and fun way, and I decided to try and make “one feed to rule them all”.  I also challenged myself to do as much of the work as possible in the client using JavaScript templates.  Let’s face it folks, JavaScript runs on most machines nowadays, including our smart phones, so it’s a great platform for developing for a wide audience (unless you use WebGL, which is just too new!)  My site is hosted on Heroku, but it is actually a 100% static site at the moment, so I achieved my mission.

Here was my other goal: use my Twitter feed as the “root feed”, but render any known embedded links. As in, an Instagram photo should appear with the tweet, a Tumblr post should be fully embedded as well. I got to use the new(ish) Tweet Entities API to do all of the rendering on the client in JavaScript, which was a godsend. No more nasty regular expressions to find links inside of a Tweet. They do it all for you!

Let’s run through the code and see some of the magic.  First up, getting my tweets from the Twitter API using jQuery:

var max_tweet_id = undefined;  // Used to perform infinite scrolling

// Called on document load and when the window is scrolled to the bottom.  Adds 10 tweets to the page.
function loadTweets() {
	var data = {
		screen_name: 'testa',
		include_rts: false,
		count: 10,
		include_entities: true,  // New, very nice API for extracting links embedded in tweets!
		exclude_replies: true
	};
	if (max_tweet_id)
		data['max_id'] = max_tweet_id;

	$.ajax({
	    url: 'http://api.twitter.com/1/statuses/user_timeline.json/',
	    type: 'GET',
	    dataType: 'jsonp',
	    data: data,
	    success: function(data, textStatus, xhr) {
		 // append tweets into page
		 for (var i = 0; i < data.length; ++i) {
		    var id = Number(data[i].id_str);
		    if (max_tweet_id === undefined || id < max_tweet_id) {
			max_tweet_id = id;
		    }
		    renderTweet(data[i]);
		 }
	    }
	});
}

I’m using Mustache.js to do the rendering.  Let’s look at the renderTweet function, which is straightforward.  It just makes a rendering dictionary and applies it to a template:

var renderTweet = function(tweet) {
	var post_template = $('#post-template').html();
	var tweet_image_and_link = tweetImage(tweet);
	$('#posts').append(Mustache.to_html(post_template, {
		'PERMALINK': tweetPermalink(tweet),
		'TWEET_BODY': function () {
			return function(text, render) {
				return tweetText(tweet);
			};
		},
		'MEDIA_SRC': tweet_image_and_link[0],
		'MEDIA_URL': tweet_image_and_link[1],
		'USER': tweet.user.screen_name,
		'AGO': new Date(tweet.created_at).toRelativeTime(),
		'TWEET_ID': tweet.id_str
	}));
}

And now the Mustache template, again really straightforward. By the way, I’m using Twitter Bootstrap:

<script id="post-template" type="text/template">
	<div class="row">
		<div class="post">
			<div class="post-text">{{#TWEET_BODY}}tweet{{/TWEET_BODY}}</div>
			{{#MEDIA_URL}}<a href="{{MEDIA_URL}}"><img class="post-media-img" src="{{MEDIA_SRC}}" /></a>{{/MEDIA_URL}}
			<div class="post-ago"><a href="{{PERMALINK}}">{{AGO}}</a></div>
		</div>
	</div>
</script>

Now we’ll start exploring the JavaScript that works with the tweet entities API.  First, a couple of binary classifying functions to figure out what type of content is in the tweet.  I’ll call these in a few places to figure out how to process the tweet:

// Media tweets are ones that you embed the photo directly in Twitter's clients
var isMediaTweet = function(tweet) {
	return 'media' in tweet['entities'];
}

// A linked tweet has an embedded url
var isLinkedTweet = function(tweet) {
	if ('urls' in tweet['entities']) {
		if (tweet['entities']['urls'].length > 0) {
			return true;
		}
	}
	return false;
}

// An Instagram tweet is one that has a link to the domain instagr.am
var isInstagramTweet = function(tweet) {
	if (isLinkedTweet(tweet)) {
		var url = tweet['entities']['urls'][0];
		if (url['display_url'].indexOf('instagr.am') == 0) {
			return true;
		}
	}
	return false;
}

Notice my liberal usage of the tweet['entities'] object. I can think of way better ways to write these conditionals with one line in Python, but (I think) I need to use something like underscore.js to do these conditional tests on a single line in JavaScript. Though if someone has a better technique please let me know. While I’m at it, who has a great way to write classes in JavaScript? There’s a million ways to do it and I can’t decide which syntax is the best.

And now the rendering functions in order of complexity. First, the permalink is a one liner:

var tweetPermalink = function(tweet) {
	return 'http://twitter.com/#!/testa/status/' + tweet.id_str;
}

Figuring out if an image should be embedded is not too bad either. This looks for media or Instagram tweets at the moment, but can easily be expanded to support new services. It returns two items, the image source URL as well as the deep link to the content:

// WARNING: I CHEATED - THIS ONLY WORKS FOR 1 ENTITY!!!
var tweetImage = function(tweet) {
	if (isInstagramTweet(tweet)) {
		var url = tweet['entities']['urls'][0];
		return [url['expanded_url'] + 'media', url['expanded_url']];
	} else if (isMediaTweet(tweet)) {
		var media = tweet['entities']['media'][0];
		return [media['media_url'], 'http://twitter.com/#!/testa/status/' + tweet.id_str];
	} 
	return [null, null];
}

And now the coup de grâce, the function that renders the tweet text.  This is where the entities API really shines.  The method pulls out any links that get rendered separately, keeping the linked content distinct from my own voice.  No regular expressions, just String.substr and String.substring For The Win!

// WARNING: I CHEATED - THIS ONLY WORKS FOR 1 ENTITY!!!
var tweetText = function(tweet) {
	var text = tweet.text;
	if (isMediaTweet(tweet)) {
		var media = tweet['entities']['media'][0];
		text = tweet.text.substr(0, media['indices'][0] - 1) +
			tweet.text.substr(media['indices'][1], tweet.length);
	} else if (isInstagramTweet(tweet)) {
		var media = tweet['entities']['urls'][0];
		text = tweet.text.substr(0, media['indices'][0] - 1) +
			tweet.text.substr(media['indices'][1], tweet.length);
	} else if (isLinkedTweet(tweet)) {
		var url = tweet['entities']['urls'][0];
		text = tweet.text.substring(0, url['indices'][0]) +
			'' + url['display_url'] + '' +
			tweet.text.substring(url['indices'][1]);
	}
	return text;
}

So there you have it. A nice client-side rendering of the Twitter, Instagram, and Tumblr APIs in JavaScript. But I’m already annoyed - when I post to Instagram, it flows to Twitter, and then to Facebook, but Facebook doesn’t resolve the t.co address, so my Instagrams are not nicely embedded on Facebook. The solution for that will have to wait for another blog post and a little bit of Heroku worker magic. Until next time, hope you enjoyed this article!

3:10 pm  •  26 April 2012

The past few days I’ve been geeking out on a particular type of nanotechnology - the manufacturing and real-world usage of carbon allotropes, nano-scale organic compounds.  Namely diamonds, graphene, and carbon nanotubes (CNTs), which all promise exciting applications replacing silicon in electronic devices.  While silicon-based electronics are toxic to humans, these new carbon-based electronics are theoretically safe for human contact, which has the biological and medical fields dreaming of machines that fight alongside our white blood cells.

The first “ah-hah” to have about nanotechnology is to realize that on the nano-scale, quantum effects begin to become more forceful than our better understood macro forces like electromagnetism and gravity.  Our quest for smaller, faster, and more efficient technologies drives us to play with God’s dice, as Einstein referred to quantum mechanics.  But how do we act on things at the nano-scale, directly manipulating atomic structures?
 
[image: segmented carbon allotropes and the wavelengths they give off]
Amazingly, man has been synthesizing a wide range of nano-scale materials since Prometheus taught us how to control fire.  Light a candle and trap the burned exhaust above the flame, and you’ve collected a wide assortment of carbon allotropes.  Protein processing techniques can then be used to segment these byproducts, called nanofluorescents, by mass and charge.  Different types give off different colors when UV radiation is applied (image of the segmented nanostructures and their emitted light frequencies on the right). [1]  But what about the production of homogeneous nanostructures, like Single Walled CNTs or macro-scale diamonds?  Larger scale production of pure allotropes is becoming easier since chemical vapor deposition started to be used in 1993, and efficiency gains in production continue. [2]

[image: spinning CNT yarn]
Chemical Vapor Deposition (CVD) is akin to a candle fire, but replaces the solid, organic matter in the candle with hydrocarbon gases.  By dropping the pressure very low and heating the hydrocarbon gases, precise nanostructures can be synthesized.  These processes have been used for metal plating for a number of years, and industrial-use “recipes” are typically very closely guarded secrets.  A recent patent has been granted for synthesizing pure, single-walled CNT forests using CVD, and turning them into yarn suitable for our traditional textile processes (image on the left). [3]

[image: a CVD diamond]
CVD has even led to the synthesis of pure diamond blocks, developed by Apollo Diamond Inc and purchased by Scio Diamond Technology this year (there’s one on that guy’s finger!) [4]  A whole new world of carbon-based processors is on the horizon, which promises another breakthrough in processor frequencies (30GHz processors please!)  As you could guess, DeBeers is rumored to be quite anxious about this development, and to be developing techniques that let jewelers recognize flawless aka “counterfeit” diamonds.  That’s fine by me; I want to use diamonds to improve technology and society, not amass blood money.

I’ve heard before that we are about to enter a “diamond age” and now I can see it on the horizon.  In the coming years we will learn how to more completely exploit quantum processes on the nanoscale, and nano-organics light an exciting path for technology’s evolution.

[1] http://bznotes.wordpress.com/2007/09/10/carbon-nanoparticles-by-candle-light/
[2] http://www.uc.edu/News/NR.aspx?ID=5700
[3] http://adsabs.harvard.edu/abs/2003Natur.423..703D
[4] http://www.jckonline.com/2011/09/27/new-company-plans-to-produce-synthetic-diamonds

4:53 pm  •  27 November 2011

Google uses a process oriented infrastructure for its cloud. That’s why I love Heroku’s simple Procfile system. It’s easy to wrap your head around, and makes scaling simple.

I like to control my machines directly though, so I’ve been using foreman to develop and deploy my personal services lately.  Foreman either runs as the process runner itself, which is great for development, or it lets you export to the Upstart service, which is great for production.  I decided to write a copy of foreman’s custom runner in Python, which is under 100 lines and has zero dependencies.
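To make the idea concrete, here’s a toy sketch of a Procfile runner.  This is illustrative only, not the actual runner from the gist linked below:

import subprocess
import sys
import time

def parse_procfile(path):
    """Read 'name: command' lines out of a Procfile."""
    procs = {}
    with open(path) as f:
        for line in f:
            line = line.strip()
            if not line or line.startswith('#'):
                continue
            name, _, command = line.partition(':')
            procs[name.strip()] = command.strip()
    return procs

def run(procs):
    """Start every process, then tear everything down when any one of them exits."""
    children = [(name, subprocess.Popen(cmd, shell=True)) for name, cmd in procs.items()]
    try:
        while all(child.poll() is None for _, child in children):
            time.sleep(0.5)
    finally:
        for _, child in children:
            if child.poll() is None:
                child.terminate()

if __name__ == '__main__':
    run(parse_procfile(sys.argv[1] if len(sys.argv) > 1 else 'Procfile'))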

Future plans would be to use yaml outright and support more of the borgconfig & borgmon features that make Google’s cloud service so indispensable.  Check out the gist!

9:14 am  •  14 November 2011
