Saturday, May 18, 2013

My FPGA design skills are a little rustier than I thought

Today I'm going to Makerfaire in the Bay Area. I'd had an idea percolating in my head to use an FPGA to implement fixed-point equivalents of the analog music synthesizer modules of the 1970s, and I gave myself a couple of weeks to design and build a simple synthesizer. I'd been a synthesizer enthusiast in high school and college: I attended high school with the late David Hillel Wilson and had many interesting discussions with him about circuit design for synthesizers, a passion he shared with his father. While he taught me what he knew about synthesizers, I taught him what I knew about electronics, and we both benefited.

Now I have to confess that since my switch to software engineering in the mid-90s, I haven't really done that much with FPGAs, but I've fooled around a couple of times with Xilinx's ISE WebPack software and stumbled across MyHDL, which dovetailed nicely with my long-standing interest in Python. So I ordered a Papilio board and started coding up Python which would be translated into Verilog. My humble efforts appear on Github.

There was a lot of furious activity over the two weeks before Makerfaire, which I hoped would produce something of interest, and I learned some new things, such as delta-sigma DACs. Being an impatient reader, I designed the delta-sigma DAC myself from scratch, and ended up diverging from how it's usually done. My design maintains a register holding an estimate of the capacitor voltage on the RC lowpass driven by the output bit, and updates that register (requiring a multiplier, because of the exp(-dt/RC) term) as it supplies bits. It works, but it has a failure mode of generating small audible high-frequency artifacts, particularly when the output voltage is close to its minimum or maximum. On the long boring flight out, I had plenty of time to think about that failure mode, and it seems to me the classic delta-sigma design would almost certainly suffer from it too. I think it could be reduced by injecting noise to break up the repetitive patterns that appear in the bitstream.
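
To make the register-update idea concrete, here's a rough software model of the loop in plain Python (a sketch, not my actual FPGA code; the dt/RC ratio and bit widths are made-up values for illustration):

    import math

    FRACBITS = 16
    ONE = 1 << FRACBITS                  # fixed-point 1.0
    # ALPHA approximates exp(-dt/RC); the 0.05 ratio is an assumed value
    ALPHA = int(round(math.exp(-0.05) * ONE))

    def delta_sigma_bits(samples):
        """Yield one output bit per sample, tracking an estimate of the
        voltage on the RC lowpass that the output bit drives."""
        estimate = 0                     # the register described above
        for x in samples:                # fixed-point samples in [0, ONE)
            bit = 1 if estimate < x else 0
            rail = ONE if bit else 0
            # decay the estimate toward whichever rail the bit selects;
            # the multiply here is the exp(-dt/RC) term
            estimate = (ALPHA * estimate + (ONE - ALPHA) * rail) >> FRACBITS
            yield bit

In this model, the noise-injection idea would amount to adding a small random dither to x (or to estimate) just before the comparison, so the loop can't settle into a short repeating bit pattern.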

I like Python a lot, but I'm not sure I'm going to stay with the MyHDL approach. As I learn a little more about Verilog, it seems like it's probably a better idea to design directly in Verilog. The language doesn't look that difficult as I study MyHDL's output, and while books on Verilog tend to be expensive, some of them are affordable on the Kindle, and a couple of others are affordable in paper form.

MyHDL-translated designs do not preserve Verilog's modularity well, and I think it would be good to build up a library of Verilog modules in which I have high confidence. MyHDL's simulation doesn't always completely agree with what the Xilinx chip will do. And while MyHDL.org talks a lot about how great it is to write tests in Python, the Verilog language itself also provides substantial support for testing. Verilog supports signed integers, but as far as I've seen, MyHDL doesn't (this is INCORRECT, please see the addendum below), and for the fixed-point math in the synth modules, that alone would have steered me toward straight Verilog a lot sooner had I been aware of it.

It appears the world of Verilog is much bigger and much more interesting than I'd originally thought. I've started to take a look at GPL Cver, a Verilog interpreter that (I think) has debugger-like features such as setting breakpoints and single-stepping through your design. I had been thinking about what features I'd put into a Verilog interpreter if I were writing one, and a little googling showed me that such a thing already existed. So I look forward to tinkering with Cver when I get home from Makerfaire.

EDIT: Many thanks to Jan Decaluwe, the developer of MyHDL, for taking the time to personally respond to the challenges I encountered with it. Having had a couple of days to relax after the hustle and bustle of Makerfaire, and to get over the disappointment of not getting my little gadget working in time, I can see that I was working in haste and neglected to give MyHDL the full credit it deserves. At the very least it explores territory that is largely uncharted, bringing modern software engineering to the HDL world, where (like all those computational chemists still running Fortran code) things have tended to lag behind the times a bit.

In my haste, I neglected the documentation specifically addressing signed arithmetic in MyHDL; I didn't take the time to read the docs carefully. As Jan points out in his writings and in his comment on this post, MyHDL's approach to signed arithmetic is in fact simpler and more consistent than Verilog's. What does signed arithmetic look like in MyHDL? It looks like this.

    >>> from myhdl import Signal, intbv

    # INCORRECT: an [8:] slice makes an unsigned 8-bit intbv, which can't hold -1
    >>> x = Signal(intbv(0)[8:])
    >>> x.next = -1
    Traceback (most recent call last):
        ...blah blah blah...
    ValueError: intbv value -1 < minimum 0

    # CORRECT: give intbv an explicit range; values run from min to max-1 inclusive
    >>> x = Signal(intbv(0, min=-128, max=128))
    >>> x.next = -1      # happy as a clam

In the case where MyHDL's behavior appeared to diverge from that of the physical FPGA: my numerically-controlled amplifier circuit uses one of the hardware multipliers in the XC3S500E, which multiplies two 18-bit unsigned numbers to produce a 36-bit unsigned product. When my music synthesizer was at one point unable to make any sound, I tracked the problem down to the amplifier circuit, which was working fine in simulation. There was already a hardware multiplier working in the delta-sigma DAC. I poked at things with a scope probe, scratched my head, studied my code and other people's code, and ultimately determined that I needed to latch the factors in registers just before the multiplier. Whether that was exactly the problem, I still can't say, but with that change the amp circuit finally worked correctly.
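
A minimal MyHDL sketch of that fix might look like the following (hypothetical module and signal names; the 18-bit widths are those of the XC3S500E's multipliers, and this is an illustration rather than my actual code):

    from myhdl import Signal, intbv, always

    def registered_multiplier(clk, a, b, product):
        # a, b: 18-bit unsigned factors; product: 36-bit unsigned result
        a_reg = Signal(intbv(0)[18:])
        b_reg = Signal(intbv(0)[18:])

        @always(clk.posedge)
        def logic():
            # latch the factors on the clock edge, then multiply the
            # latched copies, so the multiplier sees stable inputs
            a_reg.next = a
            b_reg.next = b
            product.next = a_reg * b_reg

        return logic

The multiply then happens one clock after the factors are latched, which costs a cycle of latency but gives the hardware multiplier the stable, registered inputs it apparently wants.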

I wrongly concluded that this indicated some fault in MyHDL's veracity as a simulator: if it didn't work in the chip, it shouldn't have worked in simulation. But with more careful thought I can see that it's really an idiosyncrasy of the FPGA itself, or perhaps of the ISE WebPack software. I would expect to have run into the same issue if I'd been writing in Verilog. I might have seen it coming if I'd done post-layout simulation in WebPack, and I should probably look into doing that; once the bits are actually inside the chip, you can only see the ones that appear on I/O pins.

2 comments:

Jan Decaluwe said...

1. When MyHDL simulation doesn't match implementation, that means there's a bug in the convertor. Let us know.

2. Signed arithmetic is one of MyHDL's strong points, because it's transparent: just use negative values for the min value of intbv's. In contrast, Verilog is totally confusing because it casts everything to unsigned in a mixed expression. The MyHDL convertor does all the required castings and type conversions for you.

Will Ware said...

Hi Jan, thanks very much for your response. I was perhaps hasty in some of the things I wrote and may have neglected to give MyHDL the full credit it deserves. I will review my conclusions and update this posting accordingly.

I'm looking at section 6.3.10 describing signed arithmetic support in MyHDL, which says that one simply assigns a negative value to an intbv object. I thought that's what I had done, and my recollection was that it raised an exception. But I should go back and look at that more closely. I was working in a rush, and probably being a little careless about the conclusions I reached.

I'll plan to do some more work with MyHDL and see if I can reproduce the case where it appeared to disagree with the FPGA. That may have been another artifact of my haste.

Apologies if my writing (which I usually assume hardly anybody is reading) has created any bad impression of your work. One thing I've seen in the world of Verilog is that in many ways it trails far behind the rest of software engineering, and you are one of the few people doing anything to address that.