Taking control

Up to this point, we’ve only looked at software for the lifetester internals. For the unit to be of any use, it needs to talk to the outside world. With Arduino, the obvious choice is the serial (UART) interface, which is handy for sending and receiving strings, but if we were to test many solar cells in parallel we’d need a separate connection for each device. With I2C, however, we can connect many (over 100) devices to the same bus, each as a slave, and talk to all of them from a single master device. I wanted to keep this interface light and use only single-byte commands, since communication may be slow or unreliable over reasonably long wires. Note that I2C was developed for communication between devices on the same board, not board-to-board.

Protocol

Bit      | 7     | 6   | 5   | 4   | 3   | 2   | 1   | 0
Function | ChA/B | R/W | RDY | ERR | ERR | CMD | CMD | CMD

Note that in addition to the byte-wide command register, there are longer registers for storing measurement parameters and data.
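
For reference, the bit layout above maps naturally onto a handful of masks and helper macros. The definitions below are only a sketch of how the GET_COMMAND, IS_WRITE, RDY and error-bit macros used later in this post might be written – the real ones aren’t shown here, so the exact bit positions, mask names and error-code encoding are my assumptions based on the table.

/* Sketch only: bit positions and masks assumed from the table above. */
#define CH_SELECT_BIT         (7U)                    /* ChA/B select        */
#define RW_BIT                (6U)                    /* assume 1 = write    */
#define RDY_BIT               (5U)
#define ERR_OFFSET            (3U)                    /* two error bits: 4:3 */
#define ERR_MASK              (0x3U << ERR_OFFSET)
#define CMD_MASK              (0x7U)                  /* command bits: 2:0   */

#define GET_CHANNEL(REG)      (((REG) >> CH_SELECT_BIT) & 0x1U)
#define GET_COMMAND(REG)      ((REG) & CMD_MASK)
#define IS_WRITE(REG)         ((((REG) >> RW_BIT) & 0x1U) == 1U)
#define IS_RDY(REG)           ((((REG) >> RDY_BIT) & 0x1U) == 1U)
#define SET_RDY_STATUS(REG)   ((REG) |= (1U << RDY_BIT))
#define CLEAR_RDY_STATUS(REG) ((REG) &= ~(1U << RDY_BIT))
#define SET_READ_MODE(REG)    ((REG) &= ~(1U << RW_BIT))
#define SET_ERROR(REG, ERR)   ((REG) = ((REG) & ~ERR_MASK) | (((ERR) << ERR_OFFSET) & ERR_MASK))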

Communication with the lifetester is done through a single byte-wide command register, with each bit used for the functions shown in the table above. Generally, the master writes its command into the register, polls the RDY bit, and then reads from the requested register once the slave signals “I’m ready” through the RDY bit. To illustrate, here’s how you would retrieve data from lifetester channel A:

  1. Master requests a write to the command register by writing 0x40.
  2. Now the master is allowed to write its command which will be 0x02 for a request to read channel A’s data.
  3. Now the master will poll the slave by requesting a read of the command register – it writes 0x00. Then it reads a byte and checks the RDY bit.
  4. Once the RDY bit is set to 1 by the slave, the data is ready and the master can request a read of the data register and clock out 13 bytes (the full length of the data register).
Procedure used by the master device when reading measurement data from a slave lifetester over I2C.
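
To make the sequence concrete, here’s a rough sketch of the master’s side using the Wire library. The command values (0x40 to request a write to the command register, 0x02 to request a read of channel A’s data, 0x00 to read back the command register) come straight from the steps above, but the slave address, the RDY bit mask and the lack of timeout handling are my assumptions – treat this as an illustration rather than the actual master firmware.

#include <Wire.h>

#define LIFETESTER_ADDRESS  (0x0A)      // assumed slave address
#define DATA_REG_SIZE       (13)        // full length of the data register
#define RDY_BIT_MASK        (1U << 5U)  // RDY is bit 5 in the table above

static void WriteCommand(uint8_t cmd)
{
    Wire.beginTransmission(LIFETESTER_ADDRESS);
    Wire.write(cmd);
    Wire.endTransmission();
}

void ReadChannelAData(uint8_t *buf)
{
    // 1. Request a write to the command register...
    WriteCommand(0x40U);
    // 2. ...then write the command itself: read channel A's data.
    WriteCommand(0x02U);
    // 3. Poll the command register until the slave sets the RDY bit.
    uint8_t status = 0U;
    do
    {
        WriteCommand(0x00U);  // request a read of the command register
        Wire.requestFrom(LIFETESTER_ADDRESS, 1);
        status = (uint8_t)Wire.read();
    } while (!(status & RDY_BIT_MASK));
    // 4. Data is ready: clock out the full data register.
    Wire.requestFrom(LIFETESTER_ADDRESS, DATA_REG_SIZE);
    for (uint8_t i = 0U; i < DATA_REG_SIZE; i++)
    {
        buf[i] = (uint8_t)Wire.read();
    }
}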

Implementation details

So here’s how I actually did this. The first thing to note is that the Arduino already implements I2C communication in the Wire library. It expects you to register callbacks that are run when data is received from, or requested by, the master. I’ve attached the handlers here in the setup function called from main…

void setup(void)
{
  ...
  Wire.begin(I2C_ADDRESS);      // I2C address defined at compile time
  Wire.setClock(31000L);
  Wire.onRequest(Controller_RequestHandler);
  Wire.onReceive(Controller_ReceiveHandler);
  ...
}

…notice that I’ve set the clock speed deliberately low because I’m concerned about the speed of transmission over long cables. A better approach might have been to run the I2C bus in differential mode with something like this – one for the next board revision perhaps.

Data sent from master to slave

When the lifetester slave receives data from the master, the following function is called. The general idea is to check the current contents of the command register first. This tells us whether (a) a new command is being written, (b) measurement parameters are being written, or (c) the master is requesting a read of the command register.

STATIC DataBuffer_t transmitBuffer;
// We keep a static (module scope) copy of the command register
STATIC uint8_t      cmdReg;
static bool         cmdRegReadRequested = false;

void Controller_ReceiveHandler(int numBytes)
{
    // Tell the user that data is being transmitted with the LED
    digitalWrite(COMMS_LED_PIN, HIGH);

    /* Look at the current command in the command register. This tells us
       what to do with this new data from the master. Is the master writing
       to the params register? If so, read in the new params. */
    if ((GET_COMMAND(cmdReg) == ParamsReg)
        && IS_WRITE(cmdReg))
    {
        if (numBytes == PARAMS_REG_SIZE)
        {
            ReadNewParamsFromMaster();
            // protect from another write without command
            SET_READ_MODE(cmdReg);
        }
        else
        {
            // chuck away bad settings - wrong size
            FlushReadBuffer();
            SET_ERROR(cmdReg, BadParamsError);
        }
    }
    else // expect a new command to be written from the master...
    {
        const uint8_t newCmdReg = Wire.read();
        // Make sure old commands don't fill up buffer
        FlushReadBuffer();
        /* to write a new command, the master needs to request a write to the 
           command register and it's only accepted if the write bit is set and 
           the device is ready. */
        if (GET_COMMAND(newCmdReg) == CmdReg)
        {
            if (IS_WRITE(newCmdReg))
            {
                if (IS_RDY(cmdReg))
                {
                    LoadNewCmdToReg(newCmdReg);
                }
                else
                {
                    SET_ERROR(cmdReg, BusyError);
                }
            }
            // Master requested read command reg - see request handler
            else
            {
                cmdRegReadRequested = true;
            }
        }
        /* A write to the command register has already been requested, so we're
           now receiving the new command itself. We only allow a write to the
           command register if one has been requested as above. */
        else if (GET_COMMAND(cmdReg) == CmdReg)
        {
            LoadNewCmdToReg(newCmdReg);
            UpdateStatusBits(newCmdReg);
        }
        else
        {
            // TODO: handle this. received undefined command
        }
    }
    digitalWrite(COMMS_LED_PIN, LOW);
}

Once a command has been written to the slave, it’s time to update the status bits as follows. For example, we need to clear the ready bit if we’re about to load data into the data register, or set the error bits if an unknown command is issued…

static void UpdateStatusBits(uint8_t newCmdReg)
{
    // Commands are represented by a custom type (enum)
    const ControllerCommand_t c = GET_COMMAND(newCmdReg);
    if (IS_WRITE(newCmdReg))
    {
        switch (c)
        {
            case Reset:
            case ParamsReg:
            case CmdReg:
                CLEAR_RDY_STATUS(cmdReg);  // only applies for reading/loading
                break;
            case DataReg: // Master can't write to the data register
            default:
                SET_ERROR(cmdReg, UnkownCmdError);
                break;
        }
    }
    else  // read requested
    {
        switch (c)
        {
            case Reset:
                CLEAR_RDY_STATUS(cmdReg);
                break;
            case ParamsReg:
            case DataReg:
                // data requested - need to load into buffer now. Set busy
                CLEAR_RDY_STATUS(cmdReg);
                break;
            case CmdReg:  // command not loaded - preserve reg as is for reading
                break;
            default:
                SET_ERROR(cmdReg, UnkownCmdError);
                break;
        }
    }
}

Updating status

Every time a command is written, the lifetester has to actually do something with it by “consuming” it. You can see the update function here that is responsible for responding to incoming commands. It’s a simple switch statement that decides what to do based on the command bits of the command register (I’ve used an enum to store the commands and macros to read out the various bits and fields).

void Controller_ConsumeCommand(LifeTester_t *const lifeTesterChA,
                               LifeTester_t *const lifeTesterChB)
{
    LifeTester_t *const ch = 
        (GET_CHANNEL(cmdReg) == LIFETESTER_CH_A) ? lifeTesterChA : lifeTesterChB;
    switch (GET_COMMAND(cmdReg))
    {
        case Reset:
            if (!IS_RDY(cmdReg))  // RW bit ignored
            {
                StateMachine_Reset(ch);
                SET_RDY_STATUS(cmdReg);
            }
            break;
        case ParamsReg:
            if (!IS_WRITE(cmdReg))
            {
                WriteParamsToTransmitBuffer();
                SET_RDY_STATUS(cmdReg);
            }
            else
            {
                FlushReadBuffer();
                SET_RDY_STATUS(cmdReg);
            }
            break;
        case DataReg:
            if (!IS_WRITE(cmdReg))
            {
                if (!IS_RDY(cmdReg))
                {
                    // ensure data isn't loaded again
                    WriteDataToTransmitBuffer(ch);
                    SET_RDY_STATUS(cmdReg);                    
                }
            }
            break;
        default:
            break;
    }
}

Data requested from slave by master

This is an easy one. By the time data is requested from the slave, it should already be in the transmit buffer, in which case we transmit it; otherwise there’s an error condition and we set the error bits.

void Controller_RequestHandler(void)
{
    digitalWrite(COMMS_LED_PIN, HIGH);
    if (cmdRegReadRequested)
    {
        cmdRegReadRequested = false;
        Wire.write(cmdReg);
    }
    else
    {
        if (!IsEmpty(&transmitBuffer))
        {
            TransmitData();
        }
        else
        {
            SET_ERROR(cmdReg, BusyError);
        }
    }
    digitalWrite(COMMS_LED_PIN, LOW);
}

Summary

We’ve covered how to make a two-way interface to a microcontroller through a byte-wide command register using the native Arduino I2C (Wire) library. With it, we can issue commands, read status, and check for errors. I hope this was enough to convince you that you can do a lot with just a single byte. We haven’t covered how to read and write individual bits with bit math – let’s have a look at that in another post.

A State Machine – Unit Testing

Last time I talked about how I implemented a state-machine to control the lifetester that I’ve been developing. In the process, I relied heavily on unit testing the code as I wrote it. In fact, by unit testing the code while refactoring, all the development was done on my desktop machine! This is a big departure from how I used to do things a year ago where all the development that I did was using the Arduino IDE and code was compiled and run on the target. I only needed to compile for the Arduino and plug in a reference solar cell at the very end to check that everything worked as I expected and I’m pleased to say that it did. My eyes were opened to this in my first job as an embedded software engineer at CMR. I found this was one of the most striking differences between the professional and home project software development worlds. In essence, unit testing is software designed to exercise all of the behaviour of the code (as independent units) we’re intending to write. We’re trying to check that it works as we expect, whether we give the functions good or bad inputs – there are positive and negative tests. It was put to me like this once: “Unit-test the code like a burglar rather than a postman”.

How to write unit tests

I think it’s worth saying here that, as with anything, there are good and bad ways of unit testing. I really like this guide. In short, unit tests should be F.I.R.S.T:

  • Fast – probably more important on very large projects than here, but who wants to wait ages for their tests to run?
  • Independent/Isolated – tests shouldn’t rely on one another and should follow the arrange, act, assert format (see the sketch after this list).
  • Repeatable – the order the tests run in shouldn’t matter and the results should be the same every time they’re run. Tests should be responsible for setting up and tearing down all of their own data.
  • Self-validating – we shouldn’t need to inspect anything to see whether a test has passed or failed. The results should be reported automatically.
  • Thorough and timely – cover every use scenario, and be written in time to drive the development of the source code rather than bolted on later.
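
The tests in this project use CppUTest together with its mocking extension, CppUMock. To show the shape of a test module – per-test setup and teardown keeping things independent and repeatable, plus the arrange-act-assert layout – here’s a minimal sketch. The header name, the way mockLifeTester is allocated and the example assertion are my assumptions; only the macro names and the data fields match the real tests shown below.

#include "CppUTest/TestHarness.h"
#include "CppUTestExt/MockSupport.h"
#include "LifeTesterTypes.h"  // assumed header providing LifeTester_t

static LifeTester_t *mockLifeTester;

TEST_GROUP(IVTestGroup)
{
    void setup()
    {
        // Every test gets a freshly zeroed instance - nothing leaks
        // from one test into the next.
        mockLifeTester = new LifeTester_t();
    }

    void teardown()
    {
        delete mockLifeTester;
        mock().clear();  // drop any expectations registered by this test
    }
};

TEST(IVTestGroup, FreshInstanceHasNoErrorReadings)
{
    // arrange happens in setup(); act is trivial here; assert:
    CHECK_EQUAL(0U, mockLifeTester->data.nErrorReads);
}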

My attempt

Clearly, I can’t go through all the unit tests for this module as this post would be far too long but I can show you a couple of examples to give you an idea how this might work. Here goes…

TEST(IVTestGroup, SaturatedCurrentDetectedIncrementsErrorReadingsCounter)
{
    mockLifeTester->data.delayDone = true;
    mockLifeTester->data.iSampleSum = MAX_CURRENT;
    mockLifeTester->data.nSamples = 1U;
    mockLifeTester->data.nErrorReads = 0U;
    mockLifeTester->state = &StateMeasureThisDataPoint;
    const uint32_t tInit = 34524U;
    mockTime = tInit + SETTLE_TIME + SAMPLING_TIME;
    ActivateThisMeasurement(mockLifeTester);
    MocksForTrackingModeStep();
    MocksForMeasureDataNoAdcRead();
    StateMachine_UpdateStep(mockLifeTester);
    POINTERS_EQUAL(&StateTrackingMode, mockLifeTester->state);
    CHECK_EQUAL(1U, mockLifeTester->data.nErrorReads);
    CHECK_EQUAL(currentLimit, mockLifeTester->error);
    mock().checkExpectations();
}

Above is a test that checks that if the ADC readings are saturated (the current from the device goes outside the available range), the reading is counted as a bad reading and added to a counter – the lifetester should accept a few bad readings before transitioning to the error state. In the test module, I’ve set up a mockLifeTester variable (instance) that I reset before every test. So the first thing to do in the test is set the lifetester to the correct mode (MeasureThisDataPoint), reset the error readings counter and, most importantly, saturate the current reading. mockTime is my way of returning a value from millis() in the source code. You can see that I’ve incremented the timer so that the sampling window and tracking delay have expired before calling the state-machine update function. Now I do the asserts and check that we’ve transitioned back to TrackingMode (the parent state), since the sampling time is over, and that the error has been counted and recorded in the mockLifeTester data. Of course, too many bad readings should lead to a transition to StateError as follows…

TEST(IVTestGroup, TrackingModeTooManyBadReadingsTransitionToErrorState)
{
    // Setup for tracking mode.
    mockLifeTester->data.nErrorReads = MAX_ERROR_READS + 1U;
    mockLifeTester->state = &StateTrackingMode;
    MocksForTrackingModeStep();
    MocksForErrorEntry(mockLifeTester);
    StateMachine_UpdateStep(mockLifeTester);
    POINTERS_EQUAL(&StateError, mockLifeTester->state);
    mock().checkExpectations();
}

All I need to do here is set up the number of error readings above the allowed limit before calling the update function. This should lead to a transition to the error mode, which must have happened for the POINTERS_EQUAL(...) assertion to pass.

Mocking

The question on my mind before I began unit-testing embedded code was “How do we execute code written for an embedded platform on a PC? Won’t it try to call hardware specific functions that don’t exist?”. This is accomplished by mocking – any calls to low level i/o are replaced with our own mock functions. I’ve given an example here that I needed in the last test…

static void MocksForErrorLedSetup(void)
{
    mock().expectOneCall("Flasher::t")
        .withParameter("onNew", ERROR_LED_ON_TIME)
        .withParameter("offNew", ERROR_LED_OFF_TIME);
    mock().expectOneCall("Flasher::keepFlashing");
}

static void MocksForSetDacToVoltage(LifeTester_t const *const lifeTester,
                                    uint8_t v)
{
    mock().expectOneCall("DacSetOutput")
        .withParameter("output", v)
        .withParameter("channel", lifeTester->io.dac);
}

static void MocksForErrorEntry(LifeTester_t const *const lifeTester)
{
    MocksForErrorLedSetup();
    MocksForSetDacToVoltage(lifeTester, 0U);
}

When the lifetester transitions into the error state, we expect it to set the DAC and set up the flash rate of an LED to indicate that the device is in its error state. So we expect our mock functions to be called. You’ll see in the tests that there’s this statement, mock().checkExpectations(), which is responsible for checking that the correct mocks are actually called the number of times we expect. If we don’t say expectOneCall("DacSetOutput") and a call is made to this function by the source, then the test will fail. Alternatively, if we do set an expectation and the function isn’t called, the test will fail too; mocking and expectations are a really important tool for checking the behaviour of our code. Asserting on the data returned is only half the picture.
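
For completeness, here’s roughly what lives on the other side – the mock definitions that the test build links in place of the real drivers and of the Arduino millis() function (the mockTime trick mentioned above). This is a sketch: the parameter types, linkage and exact signatures are assumptions chosen to match the calls in the tests.

#include <stdint.h>
#include "CppUTestExt/MockSupport.h"

// Fake time source: tests set mockTime and any call to millis() from the
// production code sees whatever value the test chose.
uint32_t mockTime = 0U;

extern "C" uint32_t millis(void)
{
    return mockTime;
}

// Stand-in for the real DAC driver: record the call and its arguments so
// that expectOneCall("DacSetOutput").withParameter(...) can verify them.
extern "C" void DacSetOutput(uint8_t output, uint8_t channel)
{
    mock().actualCall("DacSetOutput")
        .withParameter("output", output)
        .withParameter("channel", channel);
}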

Thoughts

In this post, I’ve discussed in brief how unit testing can be used to refactor and write code that’s more robust. I hope you like it. Personally, I write unit tests for all the code that I write now even though it means you have to write twice as much code. I like the way that it helps me to think through what I’m doing and encourages me to write cleaner code where the lower layers are abstracted so they can be mocked effectively. I believe that it could be better understood and used by the ‘hacker’ community to good effect…but then would it really be hacking?

A State-Machine

Recently, I’ve been refactoring the lifetester project. The code that I wrote in the beginning is over a year old, and it just didn’t look clean to me any more after I’d gained a bit more experience. In particular, the core module responsible for doing current-voltage scans and power point tracking needed some attention. Bear in mind that it’s responsible for controlling the device and maintaining its state, i.e. it’s a state-machine, although I didn’t realise this when I first wrote it. But why bother going to this trouble if the code already works? The short answer is that without this structure, the code is hard to read (meaning that bugs can hide), hard to change and hard to test too. That’s a compelling enough case for me.

Design

For us, a state-machine is simply a way of recording the state of a system and defining conditions necessary to transition between them; it’s a way for us to visualise the job that we’re trying to do and attach some formalism to it so we can design the behaviour as we intend. Here’s my attempt at a UML state machine diagram for the solar cell lifetester project…

State-Machine diagram of the solar cell lifetester. The states are shown in black boxes with a reduced set of commands executed in the entry, step and exit functions. Transitions are indicated by red arrows and events are shown in blue text.

You can see that there are broadly only a few states: initialise, scanning, tracking and error modes, with nested sub-states within them. This is termed hierarchy in state-machine parlance, and it’s what separates this from a simple state-machine with no hierarchy. Because different states share some of the same behaviours, we can nest them inside ‘parent’ states that carry out these tasks for all of the ‘children’ inside them. This is the whole point: instead of repeating yourself and writing the same code for all states in tracking mode, say, you can put the common tasks in a parent state so that the children are only responsible for their specific duties. The other thing to note is that this whole process is executed repeatedly in a loop (in the main sketch if you’re into Arduino), so each mode has an entry, step and exit function associated with it that tells the device what to do when entering the mode, while in the mode, and when leaving the mode respectively. When we’re not making a transition, we just sit in the state we’re already in and call the step function for the parent and child state.

Defining States

States are defined by a set of actions: what to do upon entry and exit and whilst in the state itself. Actions are implemented as functions and so the state is then a collection of functions whose pointers are stored in a struct as follows…

STATIC const LifeTesterState_t StateTrackingMode = {
    {
        TrackingModeEntry, // entry function (print message, led params)
        TrackingModeStep,  // step function (update LED)
        NULL,              // exit function
        TrackingModeTran   // transition function
    },                     // current state
    NULL,                  // parent state pointer
    "StateTrackingMode"    // label
};

This particular example (see above) defines the TrackingMode state, which has no parent, as indicated by the NULL pointer, but has several child states. We don’t worry about the children in the definition, only the parent; the parent is unique to a given state but there might be many children, as in this case: TrackingMode has Delay, MeasureThisPoint and MeasureNextPoint. The reason for this will become apparent when we talk about transitions. Just for comparison, here is a child state…

STATIC const LifeTesterState_t StateMeasureThisDataPoint = {
    {
        MeasureDataPointEntry,    // entry function
        MeasureDataPointStep,     // step function
        MeasureThisDataPointExit, // exit function
        MeasureDataPointTran      // transition function
    },                            // current state
    &StateTrackingMode,           // parent state pointer
    "StateMeasureThisDataPoint"   // label
};

and you can see that this state shares many of the same functions as this one…

STATIC const LifeTesterState_t StateMeasureNextDataPoint = {
    {
        MeasureDataPointEntry,    // entry function
        MeasureDataPointStep,     // step function
        MeasureNextDataPointExit, // exit function
        MeasureDataPointTran      // transition function
    },                            // current state
    &StateTrackingMode,           // parent state pointer
    "StateMeasureNextDataPoint"   // label
};
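
The LifeTesterState_t type itself isn’t shown in this post. Piecing it together from the initialisers above, it must look something like the sketch below; the typedef and field names (StateFn_t, TranFn_t, fn, label) are assumptions chosen to be consistent with the code shown here, not necessarily the real definitions.

// Function signatures inferred from how the state functions are used.
typedef void StateFn_t(LifeTester_t *const lifeTester);
typedef void TranFn_t(LifeTester_t *const lifeTester, Event_t e);

typedef struct StateFns_s {
    StateFn_t *entry;   // run once when the state is entered
    StateFn_t *step;    // run on every update while in the state
    StateFn_t *exit;    // run once when the state is left
    TranFn_t  *tran;    // maps an event onto a transition
} StateFns_t;

typedef struct LifeTesterState_s {
    StateFns_t                      fn;      // this state's functions
    struct LifeTesterState_s const *parent;  // NULL for a top-level state
    const char                     *label;   // name used for debug printing
} LifeTesterState_t;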

State Transitions

Now that we know how to define states in this scheme, let’s talk about transitions – whenever we want to transition between an initial and a target state, we need to call the exit function of the initial state and the entry function of the target state. In the case of nested states, for example MeasureThisPoint to MeasureNextPoint, we would have to call the exit function for MeasureThisPoint and the entry function for MeasureNextPoint but not for TrackingMode, because both the initial and target states are children of TrackingMode: we never leave or enter TrackingMode. However, this varies depending on exactly which states we leave and enter. Let’s clarify with some diagrams…

State transitions (black arrow) from initial (red) to target (green) state with parent states (grey). Cases relevant to this project are shown: Case I – common/no parent, Case II – exit child state, Case III – enter child state and Case IV – different parents.

Here’s a summary (see below) of the different things that we need to do when making a state transition. Clearly, this is a simplified model for a hierarchical state-machine with only one level of nesting. Real hierarchical state machines will have many entry and exit functions to call depending on how deeply nested the initial and target states are.

Case                   | Exit initial state | Exit parent of initial state | Enter parent of target state | Enter target state
I) Common or no parent | Yes                | No                           | No                           | Yes
II) Exit child state   | Yes                | No                           | No                           | No
III) Enter child state | No                 | No                           | No                           | Yes
IV) Different parents  | Yes                | Yes                          | Yes                          | Yes

Here’s how I did this in code form:

STATIC void StateMachineTransitionToState(LifeTester_t *const lifeTester,
                                          LifeTesterState_t const *const targetState)
{
    LifeTesterState_t const *state = lifeTester->state;

    if (targetState == state)
    {
        // Do nothing. Already there
    }
    else if (targetState == state->parent)
    {
        // only need to exit current state to parent - don't run parent entry
        ExitCurrentChildState(lifeTester);
    }
    else if (targetState->parent == state)
    {
        EnterTargetChildState(lifeTester, targetState);
    }
    else if (targetState->parent == state->parent)
    {
        // Only need to transition out/in one level
        ExitCurrentChildState(lifeTester);
        EnterTargetChildState(lifeTester, targetState);
    }
    else
    {
        // Need to fully exit state and reenter target
        ExitCurrentChildState(lifeTester);
        ExitCurrentParentState(lifeTester);
        EnterTargetParentState(lifeTester, targetState);
        EnterTargetChildState(lifeTester, targetState);
    }
    // Finally transition is done. Copy the target state into lifetester state.
    lifeTester->state = targetState;
}

And to avoid calling a NULL function pointer, we need to protect ourselves like this…

static void ExitCurrentParentState(LifeTester_t *const lifeTester)
{
    if (lifeTester->state->parent != NULL)
    {
        StateFn_t *exitFn = lifeTester->state->parent->fn.exit;
        RUN_STATE_FN(exitFn, lifeTester);
    }
}

which basically says that if the parent state is defined as NULL (i.e. nothing), DO NOT call it. Otherwise we’ll end up calling through a NULL pointer and crashing.
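
RUN_STATE_FN isn’t shown either; presumably it’s a small macro guarding against NULL function pointers in the same spirit, since some states (StateTrackingMode above, for instance) leave their exit function as NULL. A sketch, with the exact form assumed:

/* Sketch: call a state function only if it has actually been defined. */
#define RUN_STATE_FN(FN, LIFETESTER)  \
    do                                \
    {                                 \
        if ((FN) != NULL)             \
        {                             \
            (FN)(LIFETESTER);         \
        }                             \
    } while (0)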

Refactoring

So back to the point, which was how measurements are now done in the state-machine scheme. Let’s look closer at TrackingMode. This state is responsible for maintaining the maximum power point. Unless there’s an error condition or the reset function is called, the state-machine will stay in this state indefinitely and transition between its sub-states. While in this state, the step function will be called:

STATIC void TrackingModeStep(LifeTester_t *const lifeTester)
{
    lifeTester->led.update();

    const bool measurementsDone = lifeTester->data.thisDone
                                  && lifeTester->data.nextDone;
    const bool trackDelayDone   = lifeTester->data.delayDone;
    if (lifeTester->data.nErrorReads >= MAX_ERROR_READS)
    {
        StateMachineTransitionOnEvent(lifeTester, ErrorEvent);
    }
    else if (!trackDelayDone)
    {
        StateMachineTransitionOnEvent(lifeTester, TrackDelayStartEvent);
    }
    else if (!measurementsDone)
    {
        StateMachineTransitionOnEvent(lifeTester, MeasurementStartEvent);
    }
    else // recalculate working mpp and restart measurements
    {
        UpdateTrackingData(lifeTester);
        lifeTester->data.thisDone = false;
        lifeTester->data.nextDone = false;
        lifeTester->data.delayDone = false;
    }
}

It’s responsible for raising an event that kicks off a transition to the next state. Let’s say that there’s no error and the tracking delay period has expired; then it’s time to do some measurements, so the MeasurementStartEvent is issued and the transition function for TrackingMode (the current state) gets called as follows…

STATIC void TrackingModeTran(LifeTester_t *const lifeTester,
                             Event_t e)
{
    if (e == MeasurementStartEvent)
    {
        if (!lifeTester->data.thisDone)
        {
            ActivateThisMeasurement(lifeTester);
            StateMachineTransitionToState(lifeTester, &StateMeasureThisDataPoint);
        }
        else if (!lifeTester->data.nextDone)
        {
            ActivateNextMeasurement(lifeTester);
            StateMachineTransitionToState(lifeTester, &StateMeasureNextDataPoint);
        }
        else
        {
            // nothing to measure - returns to caller
        }
    }
    else if (e == TrackDelayStartEvent)
    {
        StateMachineTransitionToState(lifeTester, &StateTrackingDelay);
    }
    else if (e == ErrorEvent)
    {
        StateMachineTransitionToState(lifeTester, &StateError);
    }
    else
    {

    }
}

and in this case, the state-machine will transition to MeasureThisDataPoint because no measurement has been done yet (note the use of flags here – I couldn’t see a better way of doing this at the time). Since MeasureThisDataPoint is a child of TrackingMode, only its entry function will get called.

STATIC void MeasureDataPointEntry(LifeTester_t *const lifeTester)
{
    // Scan, This or Next is activated in the transition function
    DacSetOutputToActiveVoltage(lifeTester);
    if (!DacOutputSetToActiveVoltage(lifeTester))
    {
        lifeTester->error = DacSetFailed;
        StateMachineTransitionOnEvent(lifeTester, ErrorEvent);
    }
    else
    {
        ResetForNextMeasurement(lifeTester);
    }
}

which sets up the lifetester so that everything is ready for measurements to begin – it sets the DAC to the correct drive voltage and raises an error if it can’t. Assuming all is well, the next time the state-machine is updated, the relevant step function will be called:

STATIC void MeasureDataPointStep(LifeTester_t *const lifeTester)
{
    LifeTesterData_t *const data = &lifeTester->data;

    const uint32_t tPresent = millis();
    const uint16_t tSettle = Config_GetSettleTime();
    const uint16_t tSample = Config_GetSampleTime();
    const uint32_t tElapsed = tPresent - lifeTester->timer;
    const bool     readAdc = (tElapsed >= tSettle)
                             && (tElapsed < (tSettle + tSample));
    const bool     samplingExpired = (tElapsed >= (tSettle + tSample));
    const bool     adcRead = (lifeTester->data.nSamples > 0U);

    if (readAdc) // Is it time to read the adc?
    {
        const uint16_t sample = AdcReadLifeTesterCurrent(lifeTester);
        data->iSampleSum += sample;
        data->nSamples++;
    }
    else if (samplingExpired)
    {
        if (adcRead)
        {
            *data->iActive = data->iSampleSum / data->nSamples;
            *data->pActive = *data->vActive * *data->iActive; 
            // Readings are averaged in the transition function for now.
            StateMachineTransitionOnEvent(lifeTester, MeasurementDoneEvent);
        }
        else
        {
            /*Measurement interrupted. Restart timer and try again.
            Note that we'll never leave this state if adc isn't returning data.*/
            lifeTester->timer = tPresent;
        }
    }
    else
    {
        /* Do nothing. Just leave update. More time elapses and then 
        when update is called, the next state will change.*/
    }
}

This function is responsible for getting an accurate measurement of the current at the given operating point which involves waiting for the settle time to elapse and then sampling the adc over the prescribed sampling window. When it’s happy, a MeasurementDoneEvent is raised and the transition function for this state is called…

STATIC void MeasureDataPointTran(LifeTester_t *const lifeTester,
                                     Event_t e)
{
    if (e == MeasurementDoneEvent) 
    {
        // transition child->parent. Exit function will get called.
        StateMachineTransitionToState(lifeTester, lifeTester->state->parent);
    }
    else if (e == ErrorEvent)
    {
        StateMachineTransitionToState(lifeTester, &StateError);
    }
    else
    {
        /*Don't do anything. Transition function exits and execution returns to
        calling environment (step function)*/        
    }
}

…and the state-machine will transition back to the parent state, TrackingMode. It’s important to note that the transition functions for each state largely determine the behaviour of the state-machine; they represent the arrows on the state-machine diagram. To complete the transition, the exit function for the current state will of course be called, and here the status of ‘this’ measurement is set to done.

STATIC void MeasureThisDataPointExit(LifeTester_t *const lifeTester)
{
    lifeTester->data.thisDone = true;
    UpdateErrorReadings(lifeTester);
}

Now the state-machine is back in the parent state TrackingMode; however, since the flag thisDone is now set, the state-machine will transition to MeasureNextDataPoint via TrackingModeTran (see above). Finally, once both measurements are done, TrackingModeStep will call UpdateTrackingData and the drive voltage will be updated as follows…

static void UpdateTrackingData(LifeTester_t *const lifeTester)
{
    LifeTesterData_t *const data = &lifeTester->data;
    /*if power is higher at the next point, we must be going uphill so move
    forwards one point for next loop*/
    if (data->pNext > data->pThis)
    {
        data->vThis += DV_MPPT;
        data->vNext = data->vThis + DV_MPPT;
        lifeTester->led.stopAfter(2); //two flashes
    }
    else // otherwise go the other way...
    {
        data->vThis -= DV_MPPT;
        data->vNext = data->vThis + DV_MPPT;
        lifeTester->led.stopAfter(1); //one flash
    }
    PrintNewMpp(lifeTester);
}

The public interface to the state-machine is made up of just a couple of functions that allow us to update the state and reset if needed. They call the step functions for the current state and its parent and invoke a transition to InitialiseDevice respectively.

/*******************************************************************************
* PUBLIC API 
*******************************************************************************/
void StateMachine_Reset(LifeTester_t *const lifeTester)
{
    DBG_PRINTLN("Resetting device", "%s");
    lifeTester->state = &StateNone;
    StateMachineTransitionToState(lifeTester, &StateInitialiseDevice);
}

void StateMachine_UpdateStep(LifeTester_t *const lifeTester)
{
    /*Call step functions in this order so that a transition from a NULL parent
    state will only call one step function and one transition. Whereas a
    transition from a child state will only call the step function of its
    parent. Simpler to debug.*/
    RunParentStepFn(lifeTester);
    RunChildStepFn(lifeTester);
}

Finally…

Now we have a refactored version of the previous code based on a state-machine. Hopefully it’s clearer what each function does and we are closer to the single responsibility principle, even if we have more code. The logic of this module is now easier to follow. Furthermore, by implementing a state-machine, I’ve been able to provide an API that lets me issue a software reset command, which was not possible before. The other advance is that if a scientist were to come along at a later point needing to change the maximum power point tracking algorithm, this would be done in one place – UpdateTrackingData – rather than inside a single monolithic function. This implementation is also testable. In fact, to write it, I had to build a test harness, which I’d like to share with you next.

How long do solar cells live? (maximum power point tracking)

In other posts, I’ve talked about developing the lifetester board and the output from the prototypes that I’ve built. So far, however, I haven’t given any detail on how maximum power point tracking actually works, and in this post I want to unravel things a bit. For this first attempt, I’ve gone for a really simple hill-climbing algorithm which looks like this:

In summary, it does the following steps to update the drive voltage to maintain the MPP:

  1. Scan the drive voltage and look for the maximum power point to be used as an initial guess (not shown).
  2. Set the drive voltage (V) for this point, measure the current.
  3. Set the drive voltage (V + dV) for the next point, measure the current.
  4. If Power(next) > Power(this), set V += dV, else set V -= dV.
  5. Repeat step 2.

In software, the update (step) function looks like this:

void IV_MpptUpdate(LifeTester_t *const lifeTester)
{
    uint32_t tElapsed = millis() - lifeTester->timer;
  
    if ((lifeTester->error != currentThreshold)
        && (lifeTester->nErrorReads < MAX_ERROR_READS))
    {
        if ((tElapsed >= TRACK_DELAY_TIME)
            && tElapsed < (TRACK_DELAY_TIME + SETTLE_TIME))
        {
            //STAGE 1: SET INITIAL STATE OF DAC V0
            DacSetOutput(lifeTester->IVData.v, lifeTester->channel.dac);
        }
        else if ((tElapsed >= (TRACK_DELAY_TIME + SETTLE_TIME))
                 && (tElapsed < (TRACK_DELAY_TIME + SETTLE_TIME + SAMPLING_TIME)))
        {
            //STAGE 2: KEEP READING THE CURRENT AND SUMMING IT AFTER THE SETTLE TIME
            lifeTester->IVData.iCurrent += AdcReadData(lifeTester->channel.adc);
            lifeTester->nReadsCurrent++;
        }    
        else if ((tElapsed >= (TRACK_DELAY_TIME + SETTLE_TIME + SAMPLING_TIME))
                 && (tElapsed < (TRACK_DELAY_TIME + 2 * SETTLE_TIME + SAMPLING_TIME)))
        {
            //STAGE 3: STOP SAMPLING. SET DAC TO V1
            DacSetOutput((lifeTester->IVData.v + DV_MPPT), lifeTester->channel.dac);
        }
        else if ((tElapsed >= (TRACK_DELAY_TIME + 2 * SETTLE_TIME + SAMPLING_TIME))
                 && (tElapsed < (TRACK_DELAY_TIME + 2 * SETTLE_TIME + 2 * SAMPLING_TIME)))
        {
            //STAGE 4: KEEP READING THE CURRENT AND SUMMING IT AFTER ANOTHER SETTLE TIME
            lifeTester->IVData.iNext += AdcReadData(lifeTester->channel.adc);
            lifeTester->nReadsNext++;
        }
        //STAGE 5: MEASUREMENTS DONE. DO CALCULATIONS
        else if (tElapsed >= (TRACK_DELAY_TIME + 2 * SETTLE_TIME + 2 * SAMPLING_TIME))
        {
            // Readings are summed together and then averaged.
            lifeTester->IVData.iCurrent /= lifeTester->nReadsCurrent;
            lifeTester->IVData.pCurrent =
                lifeTester->IVData.v * lifeTester->IVData.iCurrent;
            lifeTester->nReadsCurrent = 0;

            lifeTester->IVData.iNext /= lifeTester->nReadsNext;
            lifeTester->IVData.pNext =
                (lifeTester->IVData.v + DV_MPPT) * lifeTester->IVData.iNext;
            lifeTester->nReadsNext = 0;

            // if power is higher at the next point, we must be going uphill so move forward one point for next loop
            if (lifeTester->IVData.pNext > lifeTester->IVData.pCurrent)
            {
                lifeTester->IVData.v += DV_MPPT;
                lifeTester->Led.stopAfter(2); //two flashes
            }
            else
            {
                lifeTester->IVData.v -= DV_MPPT;
                lifeTester->Led.stopAfter(1); //one flash
            }
            // finished measurement now so do error detection
            if (lifeTester->IVData.iCurrent < MIN_CURRENT)
            {
                lifeTester->error = lowCurrent;
                lifeTester->nErrorReads++;
            }
            else if (lifeTester->IVData.iCurrent >= MAX_CURRENT)
            {
                lifeTester->error = currentLimit;  //reached current limit
                lifeTester->nErrorReads++;
            }
            else //no error here so reset error counter and err_code to 0
            {
                lifeTester->error = ok;
                lifeTester->nErrorReads = 0;
            }
            PrintLifeTesterData(lifeTester);

            lifeTester->IVData.iTransmit =
                0.5 * (lifeTester->IVData.iCurrent + lifeTester->IVData.iNext);
            lifeTester->timer = millis(); //reset timer
            lifeTester->IVData.iCurrent = 0;
            lifeTester->IVData.iNext = 0;
        }    
    }
    else //error condition - trigger LED
    {
        lifeTester->Led.t(500,500);
        lifeTester->Led.keepFlashing();
    }
}

This function operates on a custom lifetester type that contains all the relevant information about the state of the device under test. We pass a pointer to this data, which the update function works on. It’s a pseudo-object-oriented approach: C is obviously not an object-oriented language, but by using a struct like an instance, this function behaves a bit like a method. This way, we can have another lifetester instance to represent another device under test (or many more if we choose) and they should not interact.
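
To make the ‘instance’ idea concrete, the usage looks roughly like this (a sketch only – the real sketch file and the initialisation of the channel-specific members aren’t shown in this post):

// Two independent 'instances', one per device under test.
// Initialisation of the io/adc/dac/led members is omitted here.
LifeTester_t lifeTesterChA;
LifeTester_t lifeTesterChB;

void loop(void)
{
    // Each call only touches the struct it's given, so the two
    // channels can't interfere with one another.
    IV_MpptUpdate(&lifeTesterChA);
    IV_MpptUpdate(&lifeTesterChB);
}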

As illustrated in this solution, it’s important not to block the microcontroller with calls to delay(). If we did, the device wouldn’t be able to update another channel, say, or the state of the LEDs. I wrote this code almost a year ago and although it works, it’s not clean:

  • The function is too long – it’s doing more than one thing.
  • There are unnecessary comments. If the code were written well, it would be self-documenting.
  • There is duplication: this point and next point share almost identical code.
  • Spot the magic numbers.

I’ve now refactored this code by means of a state-machine and will present it in a coming article.

Arduino without the IDE – An intro to UNIX Make

Recently I’ve been having a go at make. Make is an ancient and powerful UNIX utility that we can use for automating a software build process. But why do we need this when the Arduino IDE does this for us? For me, this comes down to the following:

  1. I wanted to have full control over the build process and all files that are included, such as the Arduino cores; as the lifetester project nears maturity, I want to be in control of all the files included and what they contain.
  2. I didn’t like the way that all tabs in the Arduino sketch are stitched together (see Arduino build process). This means that any global variables that you declare as static within a module are then brought into the same file and are no longer private.
  3. Lastly, I like to use Sublime Text. I love the text highlighting and keyboard shortcuts. It really speeds up editing for me. Since discovering it, it’s been hard for me to accept anything else including the Arduino IDE.

I should say at this point that there is already a well-developed makefile for Arduino projects here. For me, it was impenetrable, so I went through the exercise of writing my own to get some idea of how this mysterious tool works. If this interests you, then read on. Otherwise go to the link and check out a copy.

So, what I was looking for was a way to write C files and build them into a binary that I could upload onto the ATmega328 without the IDE. Compiling and linking C files in UNIX (or Windows) is straightforward – just invoke the C compiler with cc. What does the Arduino IDE do then? All you have to do is turn on verbose output in settings and you can see the commands issued in the console (and loads of other output too) at the bottom of the IDE. To demonstrate this, I saved a copy of Blink.ino as MyBlinkTest.cpp and copied the relevant files from the Arduino cores (/usr/share/arduino/hardware) and variants folders to the working directory. I called the following commands at the prompt…

$ avr-gcc -Os -DF_CPU=16000000UL -mmcu=atmega328p -c MyBlinkTest.cpp main.cpp new.cpp Stream.cpp wiring.c wiring_digital.c WString.cpp
$ avr-gcc -mmcu=atmega328p main.o MyBlinkTest.o new.o Stream.o wiring.o wiring_digital.o WString.o -o MyBlinkTest.elf
$ avr-objcopy -O ihex -R .eeprom MyBlinkTest.elf MyBlinkTest.hex
$ avrdude -F -V -c arduino -p ATMEGA328P -P /dev/ttyACM0 -b 115200 -U flash:w:MyBlinkTest.hex

…but it didn’t work. I found that I also needed to include “Arduino.h” at the top of MyBlinkTest.cpp. This is because the Arduino-specific commands such as digitalWrite etc. need to be declared, and this header points to where that happens. There were then a few places in the Arduino core code where I had to replace instances of <Header.h> with "Header.h". The angle brackets were telling the compiler to search system directories rather than the working directory. Then it worked!

Clearly, this approach is limited. We would have to write out all the C files and object files explicitly, copy files from the core directory, and then remember all the commands. Not really feasible, and very messy as the project starts to grow.

Enter make… What it does is automate these commands by manipulating text and punching it into the command line for us. You can see the process in the commands above, which I’m going to break down next and implement in a makefile. Before I go on, it’s worth going over the basics of make here. A makefile consists of a number of ‘rules’ that each create a ‘target’ based on ‘prerequisites’ (dependencies) and commands. They look something like this:

target [target ...]: [component ...]
   Tab ↹[command 1]
           .
           .
           .
   Tab ↹[command n]

In make, variables are called ‘macros’ and we write them as…

MACRO=definition

and use them like this…

${MACRO}

This just means that whatever text we defined for that particular macro will be inserted where we specify.  So our hello world application would look like this…

MACRO1=hello
MACRO2=world!

test:
	@echo ${MACRO1} ${MACRO2}

Armed with this basic knowledge, let’s go through a makefile for an Arduino build – the classic blinking LED. You’ll find a copy of these files here if you’re interested in having a starting point for your own adventures with make.

Step 1: Compilation

First, we call the avr-gcc compiler with a list of .c and .cpp files that we want to compile into object (.o) files. In the makefile, however, you’ll see that I haven’t specified a file list. I’ve said that the target will be a group of object files based on all .c and .cpp files in the DEPS macro combined with MyBlinkTest.cpp (Blink.ino renamed). Using the VPATH macro, I’ve pointed the compiler to the Arduino core and variants directories. This is where all the important under-the-hood stuff for building Arduino sketches is stored. Note the additional -I flag in the call to avr-gcc, which tells the compiler where to look for header files. You’ll also see that I’ve sent all the object files to a build directory that is created if it doesn’t exist. Last but not least, there’s an important make idiom in the compiler call: the use of $^, an internal macro or automatic variable which expands to a space-delimited string of prerequisites (‘implicit’ source): a list of all our .c and .cpp files.

VPATH=/usr/share/arduino/hardware/arduino/cores/arduino
VARIANTS=/usr/share/arduino/hardware/arduino/variants/standard
DEPS=${VPATH}/*.c ${VPATH}/*.cpp MyBlinkTest.cpp
BUILD_DIR=Build
CC=avr-gcc
MMCU=-mmcu=atmega328p
CFLAGS=-Os -DF_CPU=16000000UL ${MMCU}

${BUILD_DIR}/*.o: ${DEPS}
	mkdir -p Build/
	${CC} ${CFLAGS} -c $^ -I ${VARIANTS} -I ${VPATH}
	mv *.o ${BUILD_DIR}/

Step 2: Linking

The linking step simply takes all the object files that we’ve generated and bundles them together into one .elf file. Again, I’ve used the implicit source variable ($^) and the target variable $@, which substitutes in the name of the target – in this case ${PROGRAM}.elf, which evaluates to MyBlinkTest.elf.

PROGRAM=MyBlinkTest

${PROGRAM}.elf: ${BUILD_DIR}/*.o
	${CC} ${MMCU} $^ -o ${BUILD_DIR}/$@

Step 3:  File conversion

OBJCOPY=avr-objcopy

${PROGRAM}.hex: ${BUILD_DIR}/${PROGRAM}.elf
	${OBJCOPY} -O ihex -R .eeprom $< ${BUILD_DIR}/$@

Using the avr-objcopy command, the .elf file is converted into the standardised .hex format. $< is used here as it stands for the first prerequisite – there’s only one here, so you get the idea.

Step 4: Upload

PORT=/dev/ttyACM0

upload: ${BUILD_DIR}/${PROGRAM}.hex
	avrdude -F -V -c arduino -p ATMEGA328P -P ${PORT} -b 115200 -U flash:w:${BUILD_DIR}/$<

Heavy lifting done. Now it’s time to upload our beautiful code onto the Arduino with this last command, which calls avrdude. Note that I’ve used $< again to substitute in the name of the prerequisite, and a macro to hold the name of the port.

Other things to note

  • Tabs are tabs in make! Don’t indent by four spaces and expect that to be equivalent. Make only understands tabs. You have been warned.
  • It’s worth defining the first rule (default target) as the one where all the important targets are specified. I’ve called this all.
  • You can also selectively compile or upload by calling make compile or make upload from the command line, provided you have defined rules for them.
  • A clean rule is also a good idea. I’ve defined one here which just deletes everything in the build directory.
all: ${BUILD_DIR}/*.o ${PROGRAM}.elf ${PROGRAM}.hex upload

# option to compile only without upload/install
compile: ${BUILD_DIR}/*.o ${PROGRAM}.elf ${PROGRAM}.hex

upload: ${BUILD_DIR}/${PROGRAM}.hex
	avrdude -F -V -c arduino -p ATMEGA328P -P ${PORT} -b 115200 -U flash:w:${BUILD_DIR}/$<

clean:
	rm -f ${BUILD_DIR}/*
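
With those rules in place, day-to-day use from the command line is just (assuming the file is saved as Makefile in the project directory):

$ make            # default target 'all': compile, link, convert and upload
$ make compile    # build the .hex without touching the board
$ make upload     # flash the most recent build
$ make clean      # empty the build directory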

Lifetester PCB

Below is a picture of the finished PCB after assembly. I was sensible here and minimised the number of surface-mount components, so the thing ended up quite big. But it worked exactly like the breadboard prototype. I was just looking to get something down in a more permanent form to share with the EPMM group in Sheffield, my thinking being that the sooner I could share, the better.

The first lifetester PCB. Two are shown side-by-side here. Note that the tape is covering up trimmer resistors that I’ve set and don’t want to change.

Unsurprisingly, as soon as I’d tested the board, I was aware of its deficiencies and I’m going to lay them bare for you here…

  1. Amplifier input offset voltage – Any offset at the input of the current-sensing op-amp is amplified by the large gain and leads to a large offset voltage at the output. This means that all measurements (even 0V) are offset from zero. In this revision, I used an LM358 from TI, which has a typical input offset voltage of 2mV, and the gain amplifies this to 0.7V at the output. Solution: use a low input offset (precision) op-amp of course. This is an obvious strategy when you consider that we have lots of gain here and this is a DC circuit where offsets throw off our results.
  2. Reference voltage – Digital output from the (unipolar) ADC is calculated simply from the ratio of the input voltage to the reference voltage. Naturally, any fluctuation in the reference voltage will carry through to the ADC reading. Clearly, we only want to see changes in ADC digital readings that are caused by changes in the input voltage.  Solution: Use a voltage regulator. This one is used on the Arduino UNO board. You feed in 6.5 to 15V and get out 5V regulated to within 2%.
  3. ADC resolution – The resolution limits the smallest voltage that we can effectively “see”. Simply put, with a 12-bit ADC the digital reading is 2¹² × (Vin/Vref), whereas a 16-bit ADC scales its output as 2¹⁶ × (Vin/Vref), i.e. on a reference voltage of 3.5V, a 12-bit and a 16-bit ADC would read the voltage in steps of 0.85mV and 0.053mV respectively. This kind of accuracy is not essential right now, but I noticed that a 2-channel 16-bit ADC with a programmable gain amplifier was not only smaller but cheaper than two separate 12-bit ICs.
  4. Gain accuracy – The gain of an inverting op-amp is given by Vout = -Vin(Rf/Rin). The resistors used here have a tolerance of 10%, and with two of them the worst-case error in the gain is around 20%. To address this, I’ve included a trimmer resistor so that the gain can be calibrated after assembly, but with lower-tolerance resistors, say 1%, this might not be necessary.
  5. Charge pump – The op-amps here require dual supplies, plus and minus 5V, which means two power supplies to the board. I discovered a device called a charge pump (e.g. the TL7660) which takes a 5V supply and outputs -5V. Very clever! Note that this doesn’t supply much current – the output will drop by 10% when supplying 10mA – but this is more than enough for this application.
The inverting op amp (see point #4)

I’ve already started working on revision b which should address these issues. Watch this space! The design for this PCB can be found here.

How long do solar cells live? (part 3)

Finally, after much tinkering, I’ve got a system that’s worth committing to a PCB. Here is a shot of the prototype system being tested out…

A prototype breadboard lifetester being tested. Two solar cells are being held at MPP at the same time under the work-lamp. The arduino boards are used for PC interfacing and programming.

Above is a picture that I took as I was working on the system. At this point, two solar cells (under the work lamp) are illuminated and being driven at maximum power point (MPP) at the same time. As described previously, I used a current sensing circuit based on an inverting amplifier which is assembled on the long breadboard in the middle along with the DACs and ADCs needed to drive the circuit and collect data. On the neighbouring breadboard is a programmed ATMega328 chip which drives this process and is interfaced by I2C as a slave to another master ATMega328 on an Arduino UNO board. I needed another Arduino UNO board for programming the ATMega and for USB-Serial communication debugging when needed. There’s a neat article on this on the Arduino site here. Have a look at this schematic below for more detail of what I did exactly…

Schematic showing the layout of microcontroller and Arduino boards used in the picture above. Note that the analog circuit and SPI devices aren’t shown.

Unfortunately, the analog circuit that I was using was not quite doing the job. I noticed that although the output voltage from the DAC was as expected from the binary code that I was feeding into it, at the other end of the buffer amplifier (at the DUT terminal) it wasn’t. In particular, at Vin = 0V (short-circuit), the applied bias wasn’t 0V. It turns out that the buffer amplifier needs to work as a current sink in this case – current actually flows from ground to the buffer. To overcome this, in addition to +5V and 0V,  I also needed to supply -5V to the op-amp. To make sure that the output from the amplifier to the ADC, Vout, never went below 0V I used a precision rectifier circuit – it acts like an ideal diode; there’s no voltage drop at the output which is commonly associated with a regular diode. The simplified schematic is below and a full Fritzing file here.

Analog current sensing circuit used to drive the solar cells under test (DUT). The circuit is based on a precision rectifier/inverting amplifier. The range can be altered by changing Rsense.

Here’s what it does again in brief:

  1. Under illumination, current flows from ground to the buffer amplifier.
  2. Current flowing from ground to the buffer amplifier leads to a small (0 > Vx > -10mV) negative voltage across the sense resistor.
  3. This voltage is fed into an inverting op-amp. It is inverted and amplified 350 times. A precision rectifier arrangement ensures that the output can never go below 0V. Gain and offset can be tuned by means of trimmer resistors.
  4. The output is connected to an ADC for data logging and MPP tracking (a quick sanity check on these numbers follows this list).
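
As a quick sanity check on the numbers in the list above (my arithmetic, not from the original post), the amplified sense voltage conveniently spans the ADC input range:

\[
V_{\mathrm{out}} = A \,\lvert V_x \rvert \approx 350 \times 10\ \mathrm{mV} = 3.5\ \mathrm{V}
\]

which lines up with the 3.5 V ADC reference voltage mentioned in the PCB write-up.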

Below is some of the data that came out of this system…

Data measured from the prototype breadboard lifetime tester: live MPPT vs time (top panel) and the solar cell IV characteristic measured at the end of the test (bottom panel). Note that the MPP (DACx = 760) agrees well with the DAC setting during tracking.

The live MPPT data shows some fluctuation in voltage. Because of the hill climbing, perturb and observe algorithm used, the voltage is constantly being probed. You can also see a sharp step in the MPP data where I adjusted light intensity which is indicated by the increase in ADCx (current). Shortly afterwards (measurements are taken roughly every second), this is followed by DACx (applied voltage) as the MPPT system catches up which is expected. As a double check, I reset the lifetime tester to run another IV characteristic without changing the light intensity. This registered an MPP at DACx = 760 (0.38V) which was consistent with the MPP tracking data.

Having convinced myself that this system was working nicely, I decided it was time to design a PCB. More on that to come.

New job!

Sorry it’s been so long! After finishing my contract at Sheffield University, I was out of work for a couple of months and desperate to get a job sorted. To this end, I’ve been working hard to improve my knowledge of C and it seems to have worked! I was offered a job as a graduate embedded software engineer at Cambridge Medical Robotics three weeks ago. So far so good – They’re a really friendly bunch of talented people and I’m learning a lot!

A big thanks to my mate Jan for putting me forward for the job and inspiring me to go for it and thanks to Al Kelley and Ira Pohl for their book on C. If you’re thinking about a move into software, then my advice would be to get stuck in. My experience was really positive at all the interviews I attended and hard work is rewarded. Put in the time, really learn your stuff and it will happen.

A solar simulator on a budget

Light source

To get the high light intensities that I needed for this project, I hunted around for a high-performance high colour temperature white LED and came up with this one. It’s a Cree XLamp CXA2520 high lumen output and efficacy LED array. I chose the 5000K version as I wanted something that would be closer to sunlight. The device delivers 2500Lm white light at 36V and draws 0.5A. I liked the fact that it was a chip-on-board assembly that was ready to mount. I tried a smaller device but cracked it when I tried to mount it on a heatsink.

Heat considerations

We really need a heatsink here because, even though LEDs are efficient, there is still quite a lot of heat to get rid of – call it 20W (36V × 0.5A = 18W, rounding up) if we assume that all electrical power is converted to heat (obviously this is the worst-case scenario given that a significant amount of power should be converted into light and radiated away*). Keeping the temperature down increases the efficiency of the system and the lifetime of the LED. More importantly, we don’t want to alter the environment around our solar cell too much, as this would introduce an uncontrolled variable.

I found a CPU fan/heatsink lying around and looked into bonding it using adhesive thermal tape. Assuming the thermal resistance of the fan/heatsink is 0.4 K/W and the ambient temperature is 20C, the heatsink will run at 28C (20C + 20W x 0.4 K/W); hopefully the LED will be in equilibrium with this and so sit at the same temperature. I checked the specs of the heat transfer adhesive and its predicted performance is really good: to transfer 20W of heat, it would need a temperature difference of only 0.001mK across it, so we can assume the LED is at essentially the same temperature as the heatsink surface.
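For reference, the back-of-envelope heatsink figure works out like this; the three constants are just the assumptions stated above.

/* Back-of-envelope heatsink temperature using the figures quoted above. */
#include <stdio.h>

int main(void)
{
    const double powerW      = 20.0;  /* worst case: all electrical power as heat */
    const double rThetaKperW = 0.4;   /* heatsink thermal resistance, K/W         */
    const double ambientC    = 20.0;  /* ambient temperature, deg C               */

    const double heatsinkC = ambientC + (powerW * rThetaKperW);
    printf("Heatsink temperature: %.0f C\n", heatsinkC);  /* -> 28 C */
    return 0;
}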

Cree XLamp CXA2520 mounted on a CPU heatsink/fan under operation at very low current. Note that the masking tape shown here was removed for final testing.

Power output

This is the most important part: the power output calculation. We need to know how much light the LED is actually going to deliver to our solar cell. In the lifetime tester application, I envisage that each solar cell under test will be assigned its own LED; this way the system would be truly modular.

On to the calculations then… What we want to know is the light intensity (irradiance) at the solar cell’s front surface, which is simply the light power per unit area. For instance, 1 Sun illumination has an intensity of 1 kW/m2 (100 mW/cm2), and the Sun is itself often used as a unit of irradiance. Here’s how we work this out:

  • First, we want to know how much “real” power the LED emits in watts. Our eyes are more sensitive to some wavelengths than others (the peak of the eye’s response sits close to the sun’s peak emission per nm, in the green at around 500-555 nm – let’s not get drawn into a discussion about evolution here). Measuring the light output in lumens tells us how bright the LED will look to our eyes but not how much power there actually is. To convert units, we need to know what colour the light is. As mentioned, our eyes are most sensitive to green light, so green light has the most lumens per watt: 683 Lm/W. Other wavelengths have less. This Lm/W number is referred to as the luminous efficacy of radiation – it relates luminous flux to radiant flux and tells us, for a given amount of light power, how strongly it stimulates our eyes. Weird, huh. Don’t confuse this with the luminous efficacy of the source, which measures the overall efficiency of the LED in converting watts of electrical input into lumens of emitted light (126 Lm/W in this case). In fact, increasing luminous efficacy is one way to increase an LED’s apparent efficiency; if we made this one green, it would appear about twice as efficient.
  • But we don’t have a green monochromatic light source; we have a white one. So we need to average the contributions from all the different wavelengths that make up the LED’s emitted spectrum. This gets a bit complicated, but fortunately we can make some assumptions. Let’s assume that the spectrum of the LED approximates a blackbody truncated to the visible region (normally a blackbody emitter would also radiate in the NIR and UV, which we can’t see, so its overall luminous efficacy would be much lower). On that assumption, the luminous efficacy of radiation is 350 Lm/W, and the total radiant power output from the LED is 2500 / 350 = 7.1 W. We’re getting there.
  • The total radiant power is helpful, but we need the intensity: the number of watts emitted per unit solid angle. One option would be to assume the power is distributed evenly over space, but a better one is to assume the emission follows Lambert’s cosine law; Lambertian sources have the same brightness no matter what angle you view them from, even though their emission is not uniform. Let’s not get too drawn into the specifics, other than to say that the intensity falls off as the cosine of the angle and LEDs are often approximated as Lambertian emitters. So why break with tradition? We can then say that the peak intensity in the forward direction is 7.1 W / π = 2.3 W/sr, where sr stands for steradian (a unit of angle in 3D space – imagine the surface of a sphere rather than the arc of a circle).
  • To get the power at the front surface of our solar cell, we just need to know how many steradians it covers and multiply. For a 2mm x 2mm (0.04 cm2) solar cell positioned 2 cm away from the LED (face on), I expect it to cover approximately 0.031 sr (using the formula for a cone with spherical cap), which gives us 71.3 mW of incident flux and an intensity of 1781 mW/cm2, or 18 Suns! At a more reasonable distance of 5 cm, we would still have 3 Suns, which would be plenty (there’s a quick numerical cross-check of these figures below).

I’ve included the details of all these calculations in this sheet.
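As a quick cross-check of the steps above, here's a small C sketch. The 2500 lm output and 350 Lm/W efficacy of radiation come from the text; the effective 2 mm cell radius used for the cone half-angle is my assumption, chosen because it reproduces the ~0.031 sr figure.

/* Quick cross-check of the irradiance estimate above. */
#include <math.h>
#include <stdio.h>

#ifndef M_PI
#define M_PI 3.14159265358979323846
#endif

int main(void)
{
    const double lumens       = 2500.0;   /* LED luminous flux, lm                   */
    const double efficacyLmW  = 350.0;    /* luminous efficacy of radiation, lm/W    */
    const double cellAreaCm2  = 0.04;     /* 2 mm x 2 mm cell                        */
    const double cellRadiusM  = 2.0e-3;   /* effective radius for the cone (assumed) */
    const double distancesM[] = { 0.02, 0.05 };

    const double radiantW = lumens / efficacyLmW;  /* ~7.1 W total radiant power */
    const double peakWsr  = radiantW / M_PI;       /* Lambertian peak, ~2.3 W/sr */

    for (int i = 0; i < 2; i++)
    {
        const double halfAngle    = atan(cellRadiusM / distancesM[i]);
        const double solidAngleSr = 2.0 * M_PI * (1.0 - cos(halfAngle));    /* spherical cap      */
        const double fluxW        = peakWsr * solidAngleSr;                 /* power on the cell  */
        const double suns         = (fluxW * 1000.0 / cellAreaCm2) / 100.0; /* 1 Sun = 100 mW/cm2 */

        printf("d = %.0f cm: %.3f sr, %.1f mW, %.1f Suns\n",
               distancesM[i] * 100.0, solidAngleSr, fluxW * 1000.0, suns);
    }
    return 0;
}

Running this gives roughly 0.031 sr, 71 mW and ~18 Suns at 2 cm, and about 3 Suns at 5 cm, in line with the figures quoted above (small differences are just rounding).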

Mounting

When I mounted the LED, I was concerned about applying enough pressure to ensure a strong bond and good thermal contact. Here, they recommend pressures in excess of 100psi! I managed only 8psi, as I was worried about breaking the LED board. I had to rest a power supply on top of a toothpick box, which seemed to be just the right size to clear the LED’s optical surface (which shouldn’t be touched). Everything was a bit unstable as you can see…

Mounting the LED onto a CPU heatsink/fan with thermal adhesive film. Pressure applied using a small open box with a power supply on top giving 8psi.
Testing out the high power LED at 34V. Note that my power supply could only deliver 31V so I had to wire a couple of C (1.5V) batteries in series with it to get up to a more suitable voltage. You can see the meter is reading a current of 0.223A rather than the recommended 0.5A.

Driving circuit

I wired up a constant-current LED driver to drive the LED, with a potentiometer to control brightness (see schematics below). You can see from the chart that the output scales linearly with the voltage on the dimmer pin: at 0V you get maximum output, and by 4.2-4.3V the output has dropped right down to 0%.

The layout of the LED driver circuit based on the RECOM constant current LED driver unit. Output power can be controlled using the potentiometer which varies voltage supplied to the dimmer input. The output current as a function of the dimmer control voltage is also shown.
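If the dimmer input were driven from a DAC or filtered PWM output rather than the pot, the mapping from desired output to control voltage would be a simple inverted linear relationship. Here's a sketch, assuming 4.25 V (midpoint of the 4.2-4.3 V range quoted above) as the zero-output end point; the DAC idea itself is hypothetical, as the build uses a potentiometer.

/* Sketch of the dimming relationship described above: 0 V on the dimmer
   pin gives full output and ~4.25 V gives zero (assumed end point). */
#include <stdio.h>

/* Dimmer voltage needed for a requested output fraction (0.0 - 1.0). */
static double DimmerVoltageForOutput(double outputFraction)
{
    const double vOff = 4.25;  /* dimmer voltage for ~0 % output (assumed) */

    if (outputFraction < 0.0) { outputFraction = 0.0; }
    if (outputFraction > 1.0) { outputFraction = 1.0; }

    /* Linear and inverted: full output at 0 V, no output at vOff. */
    return (1.0 - outputFraction) * vOff;
}

int main(void)
{
    printf("50%% output -> %.2f V on the dimmer pin\n",
           DimmerVoltageForOutput(0.5));  /* ~2.13 V */
    return 0;
}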

This appeared to work well when I tested it out. It got fairly bright as I adjusted the dimmer voltage, as you can see from the image above; however, I don’t have a way of actually measuring the intensity at present. What I need is a calibrated meter. Unfortunately, that’s outside the price range of the shed right now, but I intend to do the measurement when I visit the labs in Sheffield again.

Mismatch factor

An important figure of merit when benchmarking solar simulators is the spectral mismatch factor. It’s basically a score your light source gets for how well it represents the solar spectrum. To work it out, we sum up the power in wavelength intervals over the visible and near infra-red portions of the electromagnetic spectrum for both the sun (reference) and the simulator (LED). Have a look at the figure below…

Calculating spectral mismatch factor: LED vs solar spectrum. Upper panel: relative spectral irradiance (area normalised) for the sun (red line) and our LED (black line). Lower panel: a table of integrated intensity over specified wavelength interval with mismatch factor (rightmost column).

Hopefully you can see straight away that there’s a big difference in the shape of the two spectra. They have been area normalised: remember that the area under each spectrum is the total power from that source, so dividing by the area under the entire spectrum effectively sets the two sources to the same total power for comparison, which is what you would do when testing a solar cell. To get the mismatch, we then sum up the areas under the spectra within the intervals shown and compare them (see the table, and the sketch of the calculation below). You can see that the LED has a lot of its output in the visible range (400-700 nm) and none in the NIR, compared to the sun.
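Here's a sketch of that band-by-band comparison. The band edges follow the usual solar simulator classification bands, but the integrated values in the two arrays are placeholders for illustration, not the numbers from my table.

/* Sketch of the mismatch calculation: per-band fractions of total power
   for the reference (sun) and simulator (LED) spectra, then their ratio.
   The spectra arrays are placeholder data, not the real measurements. */
#include <stdio.h>

#define NUM_BANDS 6

/* Wavelength band edges in nm. */
static const double bandEdgesNm[NUM_BANDS + 1] =
    { 400.0, 500.0, 600.0, 700.0, 800.0, 900.0, 1100.0 };

/* Integrated irradiance in each band (arbitrary units) - placeholder data. */
static const double sunBands[NUM_BANDS] = { 18.4, 19.9, 18.4, 14.9, 12.5, 15.9 };
static const double ledBands[NUM_BANDS] = { 30.0, 45.0, 20.0,  5.0,  0.0,  0.0 };

int main(void)
{
    double sunTotal = 0.0;
    double ledTotal = 0.0;

    for (int i = 0; i < NUM_BANDS; i++)
    {
        sunTotal += sunBands[i];
        ledTotal += ledBands[i];
    }

    /* Area-normalise each spectrum, then compare band by band. */
    for (int i = 0; i < NUM_BANDS; i++)
    {
        const double sunFrac = sunBands[i] / sunTotal;
        const double ledFrac = ledBands[i] / ledTotal;
        const double ratio   = (sunFrac > 0.0) ? (ledFrac / sunFrac) : 0.0;

        printf("%4.0f-%4.0f nm: sun %.3f, led %.3f, ratio %.2f\n",
               bandEdgesNm[i], bandEdgesNm[i + 1], sunFrac, ledFrac, ratio);
    }
    return 0;
}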

To qualify as a class A solar simulator, the ratio (last column in the table) needs to stay within 0.75 – 1.25 for every range – we’re way off! Unfortunately, for this LED, the ratio even goes outside the allowed limits for class C (0.4 – 2.0). We would need to add some NIR component to the spectrum to fix this, which is possible. For the purposes of lifetime testing on a budget, however, I think we need to accept these limitations. It’s good to know what they are though.