Friday, February 26, 2016

SPI Communications with the Arduino Uno and M93C46 EEPROM: Easy, Fun, Relaxing

When I write code for an embedded microprocessor, I frequently need to use communications protocols that allow the micro to communicate with other chips. Often there are peripherals built in to the micro that will handle the bulk of the work for me, freeing up micro clock cycles and allowing me to write fewer lines of code. Indeed, the bulk of modern microcontroller datasheets is usually devoted to explaining these peripherals. So, if you aren't trying to do anything unusual, your micro may have a peripheral that will do most of the work for you. There might be a pre-existing driver library you can use to drive the peripheral. But sometimes you don't have a peripheral, or it won't do just what you need it to do, for one reason or another. In that case, or if you just want to learn how the protocols work, you can probably seize control of the GPIO pins and implement the protocol yourself.

That's what I will do, in the example below. I will show you how to implement the SPI (Serial Peripheral Interface) protocol, for communicating with an EEPROM. I've used SPI communication in a number of projects on a number of microcontrollers now. The basics are the same, but there are always issues to resolve. The SPI standard is entertaining and keeps you on your toes, precisely because it is so non-standard; just about every vendor extends or varies the standard a bit.

The basics of SPI are pretty simple. There are four signals: chip select, clock, incoming data, and outgoing data. The protocol is asymmetrical; the microcontroller is usually the master, and other chips on the board are slaves -- although it would be possible for the micro to act as a slave, too. The asymmetry is because the master drives the chip select and clock. In a basic SPI setup, the slaves don't drive these signals; the slave only drives one data line. I'll be showing you how to implement the master's side of the conversation.

Chip select, sometimes known as slave select from the perspective of the slave chip, is a signal from the master to the slave chip. This signal cues the slave chip, informing the chip that it is now "on stage," ready for its close-up, and it should get ready to communicate. Whether the chip select is active high, or active low, varies. Chip select can sometimes be used for some extra signalling, but in the basic use case the micro sets the chip select to the logically active state, then after a short delay, starts the clock, runs the clock for a while as it sets and reads the data signals, stops the clock, waits a bit, and turns off the chip select.

Here's a picture showing the relationship between clock and chip select, as generated by my code. Note that I have offset the two signals slightly in the vertical direction, so that it is easier to see them:

The clock signal is usually simple. The only common question is whether the clock is high when idle, or low when idle. Clock speeds can vary widely. Speeds of 2 to 10 MHz are common. Often you can clock a part much slower, though. CMOS parts can be clocked at an arbitrarily slow speed; you can even stop the clock in the middle of a transfer, and it will wait patiently.

What is less simple is the number of clocks used in a transaction. That can become very complex. Some parts use consistent transfer lengths, where for each transaction, they expect the same number of clock cycles. Other parts might use different numbers of clock cycles for different types of commands.

From the perspective of the slave, the incoming data arrives on a pin that is often known, from the perspective of the microcontroller, as MOSI (master out, slave in). This is again a simple digital signal, but the exact way it is interpreted can vary. Essentially, one of the possible clock transitions tells the slave to read the data. For example, if the clock normally idles low, a rising clock edge might signal the slave to read the data. For reliability, it is very important that the master and slave are in agreement about which edge triggers the read. Above all, you want to avoid the case where the slave tries to read the incoming data line on the wrong edge, the edge when the master is allowed to change it. If that happens, communication might seem to work, but it works only accidentally, because the slave just happens to catch the data line slightly after it has changed, and it may fail when the hardware parameters change slightly, such as when running at a higher temperature.

Let me be as clear as I can: when implementing communication using SPI, be certain you are very clear about the idle state of the clock line, and which clock transition will trigger the slave to read the data line. Then, make sure you only change the data line on the opposite transition.

Terminology surrounding SPI transactions can be very confusing. According to Wikipedia and Byte Paradigm, polarity zero means the clock is zero (low) when inactive; polarity one means the clock is one (high) when inactive.

Phase zero means the slave reads the data line on the leading edge of the clock pulse, and the master may change the value on the trailing edge; phase one means the slave reads the data line on the trailing edge, and the master may change the value on the leading edge.

But some Atmel documentation (like this application note PDF file) uses the opposite meaning for "phase," where phase one means the slave reads data on the leading edge.

Because of this confusion, in my view it is best not to specify a SPI implementation by specifying "polarity" and "phase." So what would be clearer?

Aardvark tools use the terms "rising/falling" or "falling/rising" to describe the clock behavior, and "sample/setup" or "setup/sample" to indicate the sampling behaviors. I find this to be less ambiguous. If the clock is "rising/falling," it means that the clock is low when idle, and rises and then falls for each pulse. If the "sample" comes first, it means that the slave should read the data line on the leading edge, and if the "setup" comes first, it means that the slave should read the data on the trailing edge.

Here's a picture of my clock signal along with my MOSI (master out, slave in) signal. This SPI communication variant is "rising/falling" and "sample/setup." In order to allow the slave to read a valid bit on the leading clock edge, my code sets the MOSI line to its initial state before the rising edge of the first clock pulse. Again, I have offset the signals slightly in the vertical direction, so that it is easier to see them:

In the screen shot above, the master is sending nine bits: 100110000. Each bit is sampled on the rising clock edge. On the first rising clock edge, the MOSI line (in blue) is high. On the second rising clock edge, the MOSI line is low.

From the perspective of the slave, the outgoing data is sent on a pin that is often known as MISO (master in, slave out). This works in a similar way as the incoming data, except that the slave asserts the line.

When the master sends data to the slave, the master turns on the chip select (whether that means setting it low, or setting it high), changes the MOSI line and clock as needed, and then turns off the chip select.

When the master receives data from the slave, the behavior is slightly more confusing. To get data from the slave, the master has to generate clock cycles. This means that it is also sending something, depending on how it has set the MOSI line. During the read operation, what it is sending may consist of "I don't care" bits that the slave will not read. Receiving data can sometimes require one transaction to prepare the slave for the read operation, and then another to "clock in" the data. Sometimes a receive operation may be done as one transaction, but with two parts: the master sends a few bits indicating a read command, and then continues to send clock cycles while reading the slave's data line. Sometimes there are dummy bits or extra clock cycles in between the parts of this transaction.

Here's a picture that shows a read operation. I'm showing clock and MISO (mmmm... miso!). This shows a long transaction where the master sends a request (the MOSI line is not shown in this picture) and then continues to generate clock pulses while the slave toggles the MISO line to provide the requested data.

Now let's look at my hardware and software. I wrote some code to allow an Arduino Uno to communicate with a serial EEPROM chip. The chip in question is an M93C46 part. This is a 1Kb (one kilobit, or 1024 bits) chip. The parts are widely available from different vendors. I have a few different through-hole versions that I got from various eBay sellers; in testing them, they all worked fine. The datasheet I used for reference is from the ST Microelectronics version of the part.

These parts all seem to have similar pinouts. Pin 1 is the chip select, called slave select in the STM documentation. Pin 2 is the clock. Pins 3 and 4 are data pins. On the other side of the chip, there is a pin for +5V or +3.3V, a pin for ground, an unused pin presumably used by the manufacturer for testing, and a pin identified as ORG (organization), which determines whether the data on the chip is organized into 64 16-bit words, or 128 8-bit bytes.

There are other versions of this chip; the 1Kb is only one version. The command set differs slightly between sizes, but it should be pretty easy to adapt my example to a different-sized part. A full driver would be configurable to handle different memory sizes. It would not be hard to implement that, but for this example I am keeping things simple.

Here's my simple circuit, on a prototype shield mounted to an Arduino Uno:

Here's a simple schematic showing the Arduino pins connected to the EEPROM chip:

I'm not much of an electrical engineer, but that should convey that pin 1, usually marked with a little dot or tab on the chip, is on the lower right. We count pins counter-clockwise around the chip. So pin 5 goes to ground (I used the ground next to the data pins; that is the green wire going across the board). Make sure you are careful to connect the right pins to power and ground, or you can let the magic smoke out of one of these little EEPROM chips, and maybe disable your Arduino board, too, perhaps permanently (you'll never guess how I know this!)

I also have three LEDs connected to three more pins, connected through 220 ohm resistors, with the negative side of the LEDs going to a ground pin on the left side of the prototype board. Those are not required; they are there solely to create a simple busy/pass/fail display. You can use the serial monitor, if the Arduino is attached to your computer, or whatever other debugging method is your favorite.

I have done this kind of debugging with elaborate, expensive scopes that have many inputs and will decode SPI at full speed. That is very nice, but you don't necessarily need all that for a simple project like this. I got this project working using a Rigol two-channel scope. I was not able to capture a trace of all four lines at once using this scope, but I didn't need to. With two channels, I could confirm that the chip select and clock were changing correctly with respect to each other. Then I could look at the MOSI along with the clock and verify that the data was changing on the expected clock transition. Then I could look at the MISO along with the clock to verify the bits the Arduino was getting back from the serial EEPROM chip. Here's my modest setup, using a separate breadboard rather than a shield:

Here's a view of a SPI conversation with the EEPROM chip: a write operation, followed by a read operation to verify that I can get back what I just wrote. This shows clock and MOSI, so we don't see the slave's response, but you can see that the second burst has a number of clock cycles where the master is not changing the data line. Those are "don't care" cycles where the master is listening to what the slave is saying. Note also that I am running this conversation at a very slow clock speed; each transition is 1 millisecond apart, which means that my clock is running at 500 Hertz (not MHz or even kHz). I could certainly run it faster, but this makes it easy to see what is happening, if I toggle an LED along with the chip select to show me when the master is busy.

Now, here's some code.

You don't have to use these pins, but these are the ones I used.

#define SLAVESELECT 10 /* SS   */
#define SPICLOCK    11 /* SCK  */
#define DATAOUT     12 /* MOSI */
#define DATAIN      13 /* MISO */

Here's a "template" 32-bit word that holds a write command for the 16-bit organization.

#define CMD_16_WRITE ( 5UL  << 22 )
#define CMD_16_WRITE_NUM_BITS ( 25 )

This defines a 25-bit command. There is a start bit, a 2-bit opcode, a six-bit address (for selecting addresses 0 through 63), and 16 data bits.

To use this template to assemble a write command, there's a little helper function:

uint32_t assemble_CMD_16_WRITE( uint8_t addr, uint16_t val )
{
    return ( uint32_t )CMD_16_WRITE   |
           ( ( uint32_t )addr << 16 ) |
             ( uint32_t )val;
}

Now we need a function that will send that command. First, let's start with a function that will send out a sequence of bits, without worrying about the chip select and final state of the clock.

void write_bit_series( uint32_t bits, uint8_t num_bits_to_send )
{
    uint8_t num_bits_sent;

    for ( num_bits_sent = 0; num_bits_sent < num_bits_to_send;
          num_bits_sent += 1 )
    {
        digitalWrite( SPICLOCK, LOW );
        digitalWrite( DATAOUT, bits & ( 1UL <<
            ( num_bits_to_send - num_bits_sent - 1 ) ) ? HIGH : LOW );
        delay( 1 );
        digitalWrite( SPICLOCK, HIGH );
        delay( 1 );
    }
}


This maps the bits to the DATAOUT (MOSI) line. We change the data line on the falling edge of the clock. We aren't using a peripheral to handle the SPI data; we just "bit bang" the outputs, using a fixed delay between transitions.

Here's a function that will send a command that is passed to it. It works for write commands:

void write_cmd( uint32_t bits, uint8_t num_bits_to_send )
{
    digitalWrite( SLAVESELECT, HIGH );

    write_bit_series( bits, num_bits_to_send );

    /* Leave the data and clock lines low after the last bit sent */
    digitalWrite( DATAOUT, LOW );
    digitalWrite( SPICLOCK, LOW );

    digitalWrite( SLAVESELECT, LOW );
}

That's really all you need to send out a command. For example, you could send a write command like this:

write_cmd( assemble_CMD_16_WRITE( addr, write_val ), CMD_16_WRITE_NUM_BITS );

Note that before you can write successfully, you have to set the write enable. My code shows how to do that. Basically, you just define another command:

#define CMD_16_WEN   ( 19UL <<  4 )
#define CMD_16_WEN_NUM_BITS   (  9 )

write_cmd( ( uint16_t )CMD_16_WEN, CMD_16_WEN_NUM_BITS );

This EEPROM chip will erase each byte or word as part of a write operation, so you don't need to perform a separate erase. That may not be true of all EEPROM chips.

To read the data back, we need a slightly more complex procedure. Our read command uses the write_bit_series function to send out the first part of the read command, then starts clocking out "don't care" bits and reading the value of the MISO line:

uint16_t read_16( uint8_t addr )
{
    uint8_t num_bits_to_read = 16;
    uint16_t in_bits = 0;
    uint32_t out_bits = assemble_CMD_16_READ( addr );

    digitalWrite( SLAVESELECT, HIGH );

    /* Write out the read command and address */
    write_bit_series( out_bits, CMD_16_READ_NUM_BITS );

    /* Insert an extra clock to handle the incoming dummy zero bit */
    digitalWrite( DATAOUT, LOW );

    digitalWrite( SPICLOCK, LOW );
    delay( 1 );

    digitalWrite( SPICLOCK, HIGH );
    delay( 1 );

    /* Now read 16 bits by clocking. Leave the outgoing data line low.
       The incoming data line should change on the rising edge of the
       clock, so read it on the falling edge. */
    for ( ; num_bits_to_read > 0; num_bits_to_read -= 1 )
    {
        digitalWrite( SPICLOCK, LOW );
        delay( 1 );
        uint16_t in_bit = ( ( HIGH == digitalRead( DATAIN ) ) ? 1UL : 0UL );
        in_bits |= ( in_bit << ( num_bits_to_read - 1 ) );

        digitalWrite( SPICLOCK, HIGH );
        delay( 1 );
    }

    /* Leave the data and clock lines low after the last bit sent */
    digitalWrite( DATAOUT, LOW );
    digitalWrite( SPICLOCK, LOW );

    digitalWrite( SLAVESELECT, LOW );

    return in_bits;
}

And that's the basics. To test this, I put an EEPROM chip on a breadboard and just wired up the pins as specified in the code. Check your datasheet to determine if you can power the part with 5V or 3V. The chips I got seem to work fine with either, although if you are testing with a scope, you might want to use 5V so that the data out you get back from the chip has the same level as the 5V Arduino outputs.

You can find the full sketch on GitHub here.

Good luck, and if you found this useful, let me know by posting a comment. Comments are moderated, so they will not show up immediately, but I will post all (non-abusive, non-spam) comments. Thanks for reading!

Friday, January 01, 2016

Star Wars: The Force Awakens

This review contains many spoilers.

I want to start out by saying that I really was expecting, even hoping, to dislike The Force Awakens.

Entering the theater a cynical, somewhat bitter middle-aged man, I fully expected to be able to take my distaste for the other work of J. J. Abrams (particularly, his atrocious 2009 Star Trek reboot), and Disney, and recycled nostalgia in general, and throw it directly at the screen.

I was an original fan of Star Wars -- I saw the first one perhaps a dozen times in the theater -- and I pretty much agree with the critical consensus about the prequels. Their utter failure led me to believe that the things I loved most about Episode IV had, for the most part, little to do with big-budget filmmaking, but were the result of giving a bunch of really brilliant costume and set designers and cinematographers and editors and sound designers a lot of creative control and a relatively low budget -- a situation unlikely to be replicated in a truly big film, an important investment where none of the investing parties would want to take any significant risks.

I was wrong, and I'm still somewhat troubled by that. Is The Force Awakens a good movie, or was I just primed by my age and the bad experience with the prequels to suck up something relatively bad and call it good, simply because it lacks the awfulness of the prequels, and smells a lot like the 1977 original? I don't think I can actually answer that question, definitively, at least not easily, because really taking that up requires me to think critically about the original 1977 Star Wars, something I find hard to do, given the way the film imprinted itself upon my nine-year-old brain. Is it really all that and a bag of chips? Or did it just land at the right time to be the formative movie of my childhood?

One of my sons is nine, by the way. He enjoyed the new movie, but I don't think it blew his mind the way the original Star Wars blew mine, simply because we have, ever since 1977, lived in the era that had Star Wars in it.

To be clear -- it's not the case that there weren't big action movies back then, and big science fiction movies back then. We had movies like 2001: a Space Odyssey, which also formed my tastes. We had Silent Running. We had Logan's Run. But it would be impossible to overstate the shock wave of Star Wars -- the innovative effects, editing, and yes, even marketing. We just can't go back to that world. My son has seen a lot of things that are Star Wars-ish; in 1977, I never had.

And make no mistake, the new Star Wars is, most definitely, Star Wars-ish, in the way that the prequels were not. The world of the prequels was too clean, too sterile, too political, and too comic. Star Wars may have been the single most successful blending of genres ever attempted; a recent article called it "postmodern," and I think that is correct. The prequels might have been attempts at post-modern, too, but they seem to have a different set of influences, and just seem, in every respect, to have been assembled lazily, and without artfulness. For just one example, see how one of the prequel lightsaber battle scenes was actually filmed.

The Force Awakens follows the 1977 formula so closely that it is perilously close to coming across as a kind of remake or pastiche of the original. But it is not that. It is actually an homage to the original. There are a lot of parallel details and actual "easter eggs," where props make cameos, audio clips from the original movies are sprinkled into the new one. In one of my favorite moments, on Starkiller base we hear a clip from the first movie, "we think they may be splitting up." Some reviewers have made their reviews catalogs of these moments, and consider this excessive, complaining about the "nostalgia overload." But although it is noticeable, I think the producers knew just how much nostalgia would be appreciated, and how much would become annoying, and walked that line very well. The film re-creates the world where Han Solo and Leia Organa will not look out of place. And when Harrison Ford and Carrie Fisher actually appear on screen, the weight of almost 40 years suddenly lands on me, and it's a gut punch. I must have gotten something in my eye.

While Solo is an important character in this film, Leia has very few scenes, and the action centers largely around new characters. The casting is what you might call modern. No one could claim that Rey is not a strong, compelling female character. Daisy Ridley's acting in this movie is very impressive. Without her strong performance, we'd be prone to spend time musing on the oddness of her almost-absent back-story. As it is, we aren't really given a lot of time to meditate on such things, because she keeps very busy, kicking ass and taking names.

John Boyega as Finn is good too, although he doesn't seem, to me, to quite inhabit his character the way Ridley inhabits Rey. And so I find myself spending a little time wondering about the gaps and inconsistencies in his character's back-story. He describes himself as a sanitation worker. If that's true, why is he on the landing craft in the movie's opening scenes, part of the First Order interplanetary SWAT team sent to Jakku to retrieve the MacGuffin? He's supposedly a low-level worker on Starkiller base, but he knows how to disable the shields? He's a stormtrooper, trained since birth to kill, but unable to kill. Has he never been "blooded" before? We're unfortunately reminded that this doesn't actually make a lot of sense. Of course this is true of many elements of the original trilogy. The key to making that kind of thing not matter, for a Star Wars movie, is to keep everything happening so fast that the audience doesn't have time to worry about all that.

The story moves along quickly and we meet one of the most interesting characters, Kylo Ren, played by Adam Driver. Driver plays an adolescent, and puts Hayden Christensen's portrayal of Anakin to utter shame -- although one senses that much of Christensen's failure may have been due to Lucas's poor direction of the young actor. Driver is completely compelling on-screen, and his scenes with Ridley are just mesmerizing. I really can't say enough good things about them. I've seen two screenings now, and I would happily see it again, just to watch those two characters interact. It's really impressive.

That's really enough to hang a movie on -- a few really great performances, a few good performances, some terrific scenes, and no scenes that are actually bad. (Howard Hawks famously said that to make a good movie, you needed three good scenes and no bad ones; The Force Awakens exceeds that requirement).

Of course, there are a lot of confusing, unconvincing, and unwieldy things about this film. For example, Rey is much stronger in the ways of the force, and a very powerful fighter, right off the bat. She's grown up on Jakku, apparently spending years alone, and entirely untrained, while in the original trilogy we watched Luke start off with some talent for using the Force, but not much skill, and get trained up like Rocky Balboa. How did this come to pass? Well, it's a mystery we just have to accept for now. Maybe she had a lot of karate classes as a very young child. I maintain that when a movie like this leaves things unexplained, the audience will do the work for the screenwriters and make it work -- if the audience has decided to side with the movie and help it along. And if they haven't, no amount of rationalization will explain away the inevitable plot holes in a satisfying way. This movie has done such a good job at entertaining the audience, and introducing a compelling character early on, that we as the audience are pretty happy to go along, and willing to make a few allowances and give it the benefit of the doubt. With the prequels, we were bored and full of doubt, for good reasons.

There are a few flaws that I think are worth noting. The movie is just slightly too long. The reawakening of Artoo-Detoo, just after the destruction of the big bad Starkiller base, allowing the plot to continue with a literal deus ex machina -- is just slightly too silly.

What is up with Kylo Ren's helmet, and Captain Phasma's helmet? One of the notable things about the Empire was the extreme precision and cleanliness of the costumes, including the stormtrooper helmets and Darth Vader's helmet. But in the new movie, Ren's helmet is dinged and dented, with chipped paint, and Phasma's helmet is covered in fingerprints. It's not accidental; even the action figures of Kylo Ren have molded-in dents, and there is no way that someone simply forgot to polish Phasma's helmet; such an error would certainly be caught. They were made to look that way deliberately, in stark contrast to the other uniforms and suits of armor. Why is that?

There are some scenes with the Resistance, preparing X-wing fighters, that look like they were literally shot on the site of a freeway overpass; that reminded me of the way J. J. Abrams decided it was a good idea to use a brewery for the engine room of the Enterprise -- an incredibly dumb, unconvincing, revisionist look for the Engineering set. The Imperial wreckage on Jakku -- both Imperial Star Destroyers and the walkers from the invasion of Hoth in Empire -- is nostalgic, but bizarre.

There are some coincidences that feel just a little too coincidental. How did Luke's lightsaber wind up in Maz's basement, in an unlocked trunk, in an otherwise empty room?

Starkiller Base makes very little sense; the physics of it just don't work, in any reasonable universe. The Resistance leaders explain that it sucks up "the sun" -- not "the nearest star." In a galaxy with billions of suns, in a film set on multiple planets around multiple stars, the producers apparently don't trust the audience to understand how stars and planets work; don't confuse them!

But none of this is really a deal-breaker, because the movie moves so fast, and is so willing to break things. Which brings me to the biggest spoiler of all.

The movie kills Han Solo. Yes, they went there. It was at that moment that the film won me over completely. It was a brave move, and it needed to happen. The screenwriters, including Lawrence Kasdan, who worked on Empire, knew very well that if the audience was to take this movie seriously, it would need to show them that it was serious. That's what the death of Han Solo means. Harrison Ford -- who, by the way, is excellent in this film -- has a terrific death. This is also the reason that, for episode IX to work, the screenwriters will have to kill another major character -- most likely, General Leia -- in the first ten minutes.

Given the impressive start to that trilogy, I believe they will do the right thing -- and it will be glorious. And we'll regard the prequels as an unfortunate, non-canon interlude, a mere glitch, in the continuity of the Star Wars story -- and Lucas will continue his slide into irrelevant lunacy.

And meanwhile, as I approach fifty, I still have to wonder. What was the point of Star Wars? Was it ever anything resembling a genuine artistic statement, or was it always a coldly calculated money-grabbing machine, powered by myth, in which Lucas figured out how to monetize the scholarship of Joseph Campbell? Was Star Wars ever actually about anything? Was it "real" art, more than a dizzying whirlwind of entertainment, built on genre tropes and with very little in it that was groundbreaking but the improved technology of movie-making?

Was I simply bamboozled, as a child, into imagining that I was seeing a piece of art, something meaningful? If so, does it matter? Is that dizzying whirlwind of entertainment, blended with a calculated human story arc, really enough? Can real art ever be made out of genre fiction? How about Tolkien? What about smashed-together, postmodern genre fiction? Is it just screenwriting that somehow loses the status of "art?" If I enjoy both Moby Dick and Star Wars, is there something wrong with me?

And, if these distinctions don't matter, and the Disney corporation buys George Lucas's property for four billion dollars, knowing they will turn enormous profits on that investment for decades, and makes us a compelling Star Wars entirely cynically, built literally out of the formulaic building blocks of the original, but it works as well, as wonderfully -- distractingly, entertainingly, wonderfully -- as the original, does that matter? And what does it say about art, and about its audience?

Saturday, November 14, 2015

Working with a Thermistor: a C Programming Example

Recently I worked on a project that needed to monitor temperature using a thermistor. A thermistor is a resistor that measures temperature: the resistance changes depending on how hot it is. They are used in all kinds of electronic devices to monitor temperature and keep components from overheating.

I searched for a good, simple worked-out example for how to get an accurate reading from the thermistor, but had trouble finding something readable. I am not actually an electrical engineer and have never studied the math behind thermistors formally, but I was able to adapt the formulas from some existing sources, with a little help. I am sharing this example in the hope that it might be useful to someone trying to solve a similar problem. Please note the way I have adapted the general thermistor math to our particular part and circuit; unless you have an identical part and measurement circuit, you will probably not be able to use this example exactly "as-is."

As I understand it, most modern thermistor components are "NTC," which means that they have a "negative temperature coefficient," meaning that their resistance has an inverse relationship to temperature: higher temperature, lower resistance. Thermistors have highly non-linear response, and are usually characterized by the Steinhart-Hart equation. This equation is a general equation that can be parameterized to model the response curve associated with a specific thermistor device. The original form of the equation takes three coefficients, A, B, and C, and describes the relationship between thermistor resistance and temperature in kelvin (K). It turns out that the three-coefficient form is overkill for a lot of parts, and their response curve can be characterized accurately with a single parameter, using a simplified version of the equation. This single parameter is called "beta," and so the equation can be called the Beta Parameter Equation.

Reading a thermistor is complicated by the fact that in a typical application we are first using resistance as a measurement of, or proxy for, temperature; that's the basic thing a thermistor does. But in a circuit we don't read resistance directly; instead, we typically read voltage as a measure of, or proxy for, resistance. To read the resistance from a thermistor we treat it like we would treat a variable resistor, aka potentiometer: we use a voltage divider. This consists of two resistors in series. In our case we place the thermistor after a fixed resistor, and tap the voltage in between. This goes to an ADC, an analog-to-digital converter. I'm going to assume that you already have a reasonably accurate ADC and working code to take a reading from it.

So now I'm going to describe how I took the general thermistor math and adapted it for a specific part and circuit. Our specific thermistor is a Murata NCP18XH103F03RB, so you can Google the vendor and part number and find a datasheet. You need to find out a few things from the datasheet, specifically the nominal resistance at the reference temperature, which is usually 25 degrees Celsius, or 298.15 K (if it is not, note the actual reference temperature). The datasheet should also specify the beta value for your part; in our case, it is 3380.

The beta parameter equation, solved for resistance, reads:

Rt = R0 * e^( -B * ( 1 / T0 - 1 / T ) )

Where Rt is the thermistor resistance at temperature T, e is the mathematical constant e, B is beta, T0 is the reference temperature in K, and T is the measured temperature in degrees Kelvin. We want temperature given resistance, so we can solve it for temperature, like so:

T = B / ln( R / ( R0 * e^( -B / T0 ) ) )

Plugging in our R0 = 10,000 ohms, B = 3380, and T0 = 298.15 K we get:

Rt = 10000 * e^( -3380 * ( 1 / 298.15 - 1 / T ) )

and, solved for temperature:

T = 3380 / ln( R / ( 10000 * e^( -3380 / 298.15 ) ) )

Now, we need to have something to plug in for R, given the fact that we're reading a voltage from a voltage divider. In our case, the fixed resistor in our voltage divider has the same resistance value in ohms as the nominal resistance for our thermistor at 25 C, 10 kohms (10,000 ohms). Our voltage going into the voltage divider is 2.5V. The standard formula for a voltage divider like this, arranged with the fixed resistor first in the series, before the thermistor, is:

V = 2.5 * ( R / ( 10000 + R ) )

If your thermistor comes before the fixed resistor, you will want to swap the two R values (see the Wikipedia article on voltage dividers I mentioned above). To get resistance given voltage, we can solve the above for R:

R = 20000 * v / ( 5 - 2 * v )

Now we've got a formula that we can use to convert a voltage reading to a thermistor resistance reading R. We can actually plug the right hand side of that right into our beta parameter equation from above, replacing R:

T = 3380 / ln( ( 20000 * v / ( 5 - 2 * v ) ) / ( 10000 * e^( -3380 / 298.15 ) ) )

That looks kind of monstrous; it really seems like it ought to be simpler than that. But Wolfram Alpha could simplify it where my own algebra skills gave out. You can just go to the Wolfram Alpha site and paste in that equation, being careful to get the parentheses in the right places. You will want to change the T to a t so that Wolfram Alpha interprets it as a variable rather than as Tesla units. Note that Wolfram Alpha provides a nicely simplified version of the equation, perfect for our needs:

t = 3380 / ln( 167665 * v / ( 5 - 2 * v ) )

That equation describes temperature, in K, as a function of the measured voltage from our specific thermistor and voltage divider circuit. Again, keep in mind that unless you have an identical part and circuit, you will not be able to use this formula as-is. Testing against a thermocouple and a hand-held infrared thermometer suggests, so far, that our temperature readings are accurate to within a degree C. We have not tested with extremely high or low temperatures yet, but I expect it to be reasonably accurate; for this application, which involves setting fan speed and determining whether we need to shut down components, we don't need a high degree of accuracy.

Finally, remember that the results of this formula are in degrees Kelvin. A C programming language expression for converting degrees Kelvin to degrees Celsius is simply:

k - 273.15F

where k is a floating-point value representing degrees Kelvin. Similarly, you can convert to degrees Fahrenheit like so:

k * 9.0F / 5.0F - 459.67F

and the C expression to implement our voltage-to-temperature function is:

3380.0F / log( ( 167665.0F * v ) / ( 5.0F - ( 2.0F * v ) ) )

where v is a floating-point value representing the voltage from our voltage divider, and log is the C programming language's natural logarithm function, part of the C standard library of math functions.

I hope this has been helpful. Please leave a comment (note: comments are moderated, so you will not see them appear immediately) if you were able to adapt this approach to your project! If you have a question, you can leave a comment too, although keep in mind that I'm not actually an electrical engineer and so would be better at answering programming questions than circuit design questions. Happy measuring!

Thursday, February 05, 2015

A Deep Dive: the Velocity Manufacturing Simulation

In 1989 I graduated from the College of Wooster and then spent a year as an intern with Academic Computing Services there, writing newsletters and little software tools. In the summer of 1990 I moved to Ann Arbor, without a clear idea what I was going to do next.

I worked for a short while with the Department of Anthropology, but by the end of 1990, I had found a job with the Office of Instructional Technology.

OIT was sort of the University's answer to the MIT Media Lab. It was an organization where instructional designers, programmers, and faculty members could work together on projects to bring technology into classrooms. It was a pretty remarkable workplace, and although it is long gone, I am truly grateful for the varied experiences I had there. It was the early days of computer multimedia, a sort of wild west of platforms and tools, and I learned a lot.

In January of 1993 my girlfriend and her parents visited my two workplaces, OIT headquarters and the Instructional Technology Lab, a site in the Chemistry building. I handed my girlfriend a video camera and proceeded to give a very boring little talk to her, and her extremely patient parents. Wow, I was a geek. I'd like to think my social skills and ability to make eye contact are a lot better now, but I probably haven't changed as much as I imagine that I have. I'm an extraverted geek now: when I am having a conversation with you, I can stare at your shoes.

I have carried the original analog Hi-8 videocassette around through many moves, and life changes, and only today figured out a good way to get it into my computer -- after giving the camcorder heads a very thorough cleaning. I thought the tape was pretty much a lost cause, and was going to try working with my last-ditch backup, a dub to VHS tape, but I'm pleased to learn that the video is still playable, and pleased that I could finally get this made, such as it is.

This project, the Velocity Manufacturing Simulation, was written in Visual BASIC, long before it became VB.NET. I remember that it involved a fair amount of code, although I don't have the source to look at. I remember painstakingly writing code for GUI elements like the animated disclosure triangles. There was some kind of custom controls library we bought separately; the details escape me. There was some kind of ODBC (maybe?) database plug-in that I can barely recall; I think Pete did most of the work on that part. Pete wrote parts of it, and I wrote parts of it. Now it seems almost laughably primitive, but you'll just have to take my word for it that back in the day it seemed pretty cool. It won an award. As far as I know, this is the only video footage of the project.

The code is 147 years old in Internet years. It was almost half my lifetime ago. But at the same time it seems like I just left that office, and somehow if I could figure out where it was, I could still go back and find everyone there in the conference room having lunch, and after lunch settle back into my old office with the vintage, antique computers.

This was only one of several projects I worked on while I worked at OIT. I have some other bits of video for a few of them, but not all. I will get clips up for at least one more. I wish there was more tape, and better tape, even if the only one nostalgic about these projects is me.

Perhaps "enjoy" is the wrong word, but take a moment to remember what instructional multimedia was like, a few months before a group called NCSA released a program called Mosaic and the world started to hear about this exciting new thing called the World Wide Web... but grandpa's tired, kids, and that's a story for a different day.

Wednesday, November 13, 2013

Apple Breaks Apache Configurations for Gitit (Again)

I'm not quite sure why I put myself through this, but I upgraded my Mac Pro to Mavericks. This broke my local Gitit Wiki. The symptom was that Apache was unable to start, although nothing was written to the error logs. To determine what was wrong I used sudo apachectl -t. The installer did preserve my httpd.conf, but wiped out the libraries that I had installed in /usr/libexec/apache2. See the old entry I wrote back when I fixed this for Mountain Lion.

I installed Xcode 5 and thought I was set, but there is more breakage. You might need to run xcode-select --install to get headers in /usr/include. The makefile in /usr/share/httpd/build/ is still broken in Mavericks, so commands like sudo apxs -ci -I /usr/include/libxml2 mod_xml2enc.c won't work.

To make a long story short: after I got the latest (development) version of the mod_proxy_html source, these commands worked for me:

sudo /usr/share/apr-1/build-1/libtool --silent --mode=compile --tag=CC /usr/bin/cc \
    -DDARWIN -DSIGPROCMASK_SETS_THREAD_MASK \
    -I/usr/local/include -I/usr/include/apache2 -I/usr/include/apr-1 \
    -I/usr/include/libxml2 -I. \
    -c -o mod_xml2enc.lo mod_xml2enc.c && sudo touch mod_xml2enc.slo

sudo /usr/share/apr-1/build-1/libtool --silent --mode=compile --tag=CC /usr/bin/cc \
    -DDARWIN -DSIGPROCMASK_SETS_THREAD_MASK \
    -I/usr/local/include -I/usr/include/apache2 -I/usr/include/apr-1 \
    -I/usr/include/libxml2 -I. \
    -c -o mod_proxy_html.lo mod_proxy_html.c && sudo touch mod_proxy_html.slo

Previously, this gave me .so files in the generated .libs directory, but now I just have .o files and I'm not sure that's what I want.

Sunday, August 11, 2013

More Crappy Print-on-Demand Books -- for Shame, Addison-Wesley "Professional"

So, a while back I wrote about some print-on-demand editions that didn't live up to my expectations, particularly in the area of print quality -- these Tor print-on-demand editions.

Now, I've come across one that is even worse. A few days ago I ordered a book from Amazon called Imperfect C++ by Matthew Wilson -- it's useful, thought-provoking material. Like the famous UNIX-Haters Handbook, it's written for people with a love-hate relationship with the language -- that is, those who have to use it, who desperately want to get the best possible outcomes from using it, writing code that is as solid and portable as possible, and working around the language's many weaknesses. (People who haven't used other languages may not even be aware that something better is possible, and may dismiss complaints about the language as sour grapes; I'm not really talking to those people.)

The universe sometimes insists on irony. My first copy of Imperfect C++ arrived very poorly glued; the pages began falling out as soon as I opened the cover and began to read. And I am not hard on books -- I take excellent care of them.

So I got online and arranged to return this copy to Amazon. They cross-shipped me a replacement. The replacement is even worse:

Not only are the pages falling out, because they were not properly glued, but the back of the book had a big crease:

So I guess I'll have to return both.

I'll look into finding an older used copy that wasn't print-on-demand. But then of course the author won't get any money.

Amazon, and Addison-Wesley, this is shameful. This book costs $50, even with an Amazon discount. I will be sending a note to the author. I'm not sure there is much he can do, but readers should not tolerate garbage like this. Amazon, and Addison-Wesley, fix this! As Amazon approaches total market dominance, I'm reminded of the old Saturday Night Live parody of Bell Telephone: "We don't care. We don't have to. We're the Book Company."

Thursday, August 01, 2013

Arduino, Day 1

A friend of mine sent me a RedBoard and asked me to collaborate with him on a development idea. So I'm playing with an Arduino-compatible device for the first time. I've been aware of them, but just never got one, in part because after writing embedded code all day, what I've wanted to do with my time off is not necessarily write more embedded code.

I downloaded the Arduino IDE and checked that out a bit. There are some things about the way it's presented that drive me a little batty. The language is C++, but Arduino calls it the "Arduino Programming Language" -- it even has its own language reference page. Down at the bottom the fine print says "The Arduino language is based on C/C++."

That repels me. First, it seems to give the Arduino team credit for creating something that they really haven't. They deserve plenty of credit -- not least for building a very useful library -- but not for inventing a programming language. Second, it fails to give credit (and blame) for the language to the large number of people who actually designed and implemented C, C++, and the GCC cross-compiler running behind the scenes, with its reduced standard libraries and all. And third, it obfuscates what programmers are learning -- especially the distinction between a language and a library. That might keep things simpler for beginners but this is supposed to be a teaching tool, isn't it? I don't think it's a good idea to obfuscate the difference between the core language (for example, bitwise and arithmetic operators), macros (like min), and functions in the standard Arduino library. For one thing, errors in using each of these will result in profoundly different kinds of diagnostic messages or other failure modes. It also obfuscates something important -- which C++ is this? Because C++ has many variations now. Can I use enum classes or other C++11 features? I don't know, and because of the facade that Arduino is a distinct language, it is harder to find out. They even have the gall to list true and false as constants. If there's one thing C and C++ programmers know, and beginners need to learn quickly, it's that logical truth in C and C++ is messy. I would hate to have to explain to a beginner why testing a masked bit that is not equal to one against true does not give the expected result.

Anyway, all that aside, this is C++ where the IDE does a few hidden things for you when you compile your code. It inserts a standard header, Arduino.h. It links in a standard main(). I guess that's all helpful. But finally, it generates prototypes for your functions, which implies a parsing stage, via a separate tool that is not a C++ compiler.

On my Mac Pro running Mountain Lion, the board was not recognized as a serial device at all, so I had to give up using my Mac, at least until I can resolve that. I switched over to Ubuntu 12.04 on a ThinkPad laptop. The IDE works flawlessly. I tried to follow some directions to see where the code was actually built by engaging a verbose mode for compilation and uploading, but I couldn't get that working. So I ditched the IDE.

This was fairly easy, with the caveat that there are a bunch of outdated tools out there. I went down some dead ends and rabbit holes, but the procedure is really not hard. I used sudo apt-get install to install arduino-core and arduino-mk.

There is now a common makefile in my /usr/share/arduino directory and I can make project folders with makefiles that refer to it. To make this work I had to add a new export to my .bashrc file, export ARDUINO_DIR=/usr/share/arduino (your mileage may vary depending on how your Linux version works, but that's where I define additional environment variables).

The Makefile in my project directory has the following in it:

BOARD_TAG    = uno
ARDUINO_PORT = /dev/serial/by-id/usb-*
include /usr/share/arduino/Arduino.mk

And nothing else! Everything else is inherited from the common makefile. I can throw .cpp and .h files in there, and make builds them and make upload uploads them.

If you have trouble with the upload, you might take a look at your devices. A little experimentation (listing the contents of /dev before and after unplugging the board) reveals that the RedBoard is showing up on my system as a device under /dev/serial -- in my case, /dev/serial/by-id/usb-FTDI_FT232R_USB_UART_A601EGHT-if00-port0 and /dev/serial/by-path/pci-0000:00:1d.0-usb-0:2:1.0-port0 (your values will no doubt vary). That's why my Makefile reads ARDUINO_PORT = /dev/serial/by-id/usb-* -- so it will catch anything that shows up in there with the usb- prefix. If your device is showing up elsewhere, or you have more than one device, you might need to tweak this to properly identify your board.

When you look at the basic blink demo program in the Arduino IDE, you see this, the contents of an .ino file (I have removed some comments):

int led = 13;

void setup() {
    // initialize the digital pin as an output.
    pinMode(led, OUTPUT);
}

// the loop routine runs over and over again forever:
void loop() {
    digitalWrite(led, HIGH);   // turn the LED on (HIGH is the voltage level)
    delay(1000);               // wait for a second
    digitalWrite(led, LOW);    // turn the LED off by making the voltage LOW
    delay(1000);               // wait for a second
}
The Makefile knows how to build an .ino file and inserts the necessary header, implementation of main, and generates any necessary prototypes. But if you want to build this code with make as a .cpp file, it needs to look like this:

#include <Arduino.h>

int led = 13;

void setup() {
    // initialize the digital pin as an output.
    pinMode(led, OUTPUT);
}

// the loop routine runs over and over again forever:
void loop() {
    digitalWrite(led, HIGH);   // turn the LED on (HIGH is the voltage level)
    delay(1000);               // wait for a second
    digitalWrite(led, LOW);    // turn the LED off by making the voltage LOW
    delay(1000);               // wait for a second
}

int main(void)
{
    init();

#if defined(USBCON)
    USBDevice.attach();
#endif

    setup();

    for (;;) {
        loop();
        if (serialEventRun) serialEventRun();
    }

    return 0;
}

And there it is -- C++, make, and no IDE. Relaxen and watchen Das blinkenlights!