For this project, I created a solar tracker to spin a solar panel towards the sun. Previously I had the panel propped up on the ground facing south, but I thought it’d be more interesting if it could follow the sun all day.
Parts and basic construction
I purchased a 100W Solar Panel along with a pole mount. My father-in-law had a pole and two ball-bearing mounts that I used to allow the panel to spin while mounted to the post. They basically look like this, although I haven’t tested that one specifically. I also purchased a 12v worm gear motor which has a built-in rotary encoder for position tracking. I already had a 12v battery and charger set up to power the greenhouse’s exhaust fan.
I drilled out a hole in the bottom of the post for the motor shaft to fit into, as well as another hole in the side of the shaft for a locking screw to lock the shaft in place.
To test things out, I simply attached the motor to a vice on my desk. I wired up a 12v power supply for the motor, and used a voltage regulator to drop down to 5v for the other components.
I used a TB6612 motor driver from Adafruit to control the motor. Since it’s only rated at 1.2A per channel, and the motor says it can draw 2.4A, I bridged the two channels together (AIN1<->BIN1, AIN2<->BIN2, PWMA<->PWMB).
I used an ESP8266 to control everything. It has built-in WiFi, which is nice for pushing updates, getting feedback, and getting the current time to calculate the position of the sun.
I programmed the ESP with the Arduino IDE. The basic idea is to use the motor encoder to track the direction of the panel.
The motor I used is geared 522:1. That is, 522 turns of the motor are needed to turn the output shaft once. This provides a lot of torque and stops the mast from spinning freely while not powered. The encoders each emit 11 pulses (ticks) per rotation, and there are two of them. Some simple math lets us determine pulses per degree:
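In Python, that arithmetic looks like this (all three numbers come straight from the motor specs above):

```python
gear_ratio = 522      # motor turns per one turn of the output shaft
pulses_per_rev = 11   # encoder pulses per motor revolution, per encoder
encoders = 2          # the motor has two encoder channels

# Total pulses seen for one full turn of the output shaft
pulses_per_output_rev = gear_ratio * pulses_per_rev * encoders  # 11484

# Pulses per degree of panel rotation
pulses_per_degree = pulses_per_output_rev / 360
print(pulses_per_degree)  # 31.9
```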
I used the RotaryEncoder library for Arduino to keep track of the motor’s current position. It handles an interrupt each time a pulse comes in from the motor’s encoder, and just counts up or down depending on the direction of spin.
Since the encoder is not perfect, and no state is maintained after a power loss, we need a way to calibrate to a known position. To do this, I left a stop screw sticking out of the mast, preventing the motor from spinning past a certain point. The motor will spin backwards until the stop screw hits the wood. The ESP will detect that the motor is no longer making progress by tracking the rotary encoder, and will consider that the “zero” position.
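The actual firmware is Arduino C++, but the calibration idea is easy to sketch in Python: drive backwards, watch the encoder count, and declare “zero” once the count stops changing. The hardware hooks and the timeout value below are illustrative stand-ins, not the real firmware.

```python
import time

def find_zero(read_encoder, drive_backward, stop_motor, timeout=0.2):
    """Drive backwards until the encoder count stops changing.

    read_encoder, drive_backward, and stop_motor are hypothetical
    hardware hooks; `timeout` is how long the count must hold still
    before we decide the stop screw has been reached.
    """
    drive_backward()
    last = read_encoder()
    last_change = time.monotonic()
    while time.monotonic() - last_change < timeout:
        time.sleep(0.001)  # poll the encoder periodically
        current = read_encoder()
        if current != last:
            last, last_change = current, time.monotonic()
    stop_motor()
    return 0  # treat this position as the calibrated "zero"
```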
Tracking the sun
Originally, I wanted to use photoresistors on each side of the panel to determine where to rotate the panel, but this had a couple of issues:
The ESP8266 only has one analog pin (solvable, we could multiplex the pin)
Photoresistors burn out if they’re left in the sun for too long
Instead, I used the SolarPosition Arduino library, as well as an Arduino NTPClient library. With these two libraries, and a known latitude and longitude, it’s easy to get the current solar azimuth.
So that it doesn’t depend on the Internet, I set up the Raspberry Pi in the greenhouse as an access point, and made it a stratum 1 NTP server using a GPS module. The ESP connects to the Pi that is controlling my exhaust fan over WiFi, and uses its NTP service to get the current time.
Making an enclosure
Next, I designed an enclosure in FreeCAD to house the motor and the electronics. I made a simple case that the motor can mount to, and a small board to house the electronics.
Next I 3D printed all of the pieces and assembled it:
For the final touches, I coated the case with a spray-on rubber insulator to prevent water from getting inside. After mounting everything and connecting power, the panel calibrated and spun into position:
Once the sun is below the horizon, the panel re-calibrates, leaving it facing east until the next morning.
My father-in-law purchased a relatively inexpensive DVR camera system. He wanted to use it as an artificial window, by mounting a TV to the wall inside of a window frame. The problem is that there’s an annoying overlay in the corner, ruining the atmosphere.
He had contacted the company, but was told that it couldn’t be disabled, and so he asked me to take a look.
The system consists of a base station, which connects to the television. It also came with four wireless cameras. After opening the case and inspecting the board, I found a serial port, which I could use to communicate with the device.
I connected to the device over Ethernet, and looked around the web interface for camera options. I found that I could get rid of the “CAM1” part of the overlay by setting the camera’s name to be blank, but I couldn’t find anything to control the time and date portion. The only other interesting thing I found was a page to install a firmware update.
Getting the firmware
I searched online for a firmware update, and did end up finding one. The source web page was a little sketchy, but I examined it and found that it had a Linux kernel, a squashfs file system, and some other data. It did look correct, so I could have tried to make modifications to the update and upload it to the device. However, I was worried that if something went wrong, I’d have no way to recover. It’d be better to get access to the device itself, but I noted it as a possible option.
The next step was to wire up a serial connection to see if there was anything useful. I used a BusPirate, which is a USB device capable of speaking to several different bus types, and connected it to the serial pins:
Powering on the device, I immediately saw a U-Boot prompt. U-Boot is a bootloader often used on embedded devices. By hitting the any key, I was able to stop the device from its normal startup and get to the bootloader prompt.
From the U-Boot shell, I was able to load the entire contents of the flash chip into RAM, and use TFTP to copy it to my computer.
Analyzing the firmware
With the firmware file saved on my computer, I used binwalk to see what was inside of it. It turned out to be very similar to the firmware update I found online.
There’s a lot going on, but the squashfs sections, which contain the Linux filesystem, are where I suspected all of the interesting bits would be. There are two for some reason, but I started with the first.
To isolate the squashfs part of the image, I used dd.
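dd did the job, but the same carve is easy to express in Python if you’d rather script it. The offset and size in the usage comment are placeholders — substitute whatever binwalk reports for your own dump:

```python
def carve(src, dst, offset, size):
    """Copy `size` bytes starting at byte `offset` of src into dst."""
    with open(src, "rb") as f:
        f.seek(offset)
        data = f.read(size)
    with open(dst, "wb") as out:
        out.write(data)

# Hypothetical values -- use the offsets binwalk prints for your image:
# carve("flash_dump.bin", "rootfs.squashfs", offset=0x200000, size=0x3C0000)
```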
Next, I unpacked it with unsquashfs. Finally, I was looking at the contents of the system!
Getting access to the live system
My initial goal was to get remote access while the system was running normally. The system had the telnetd binary installed; I just had to get it to start when the system boots. In the startup script /etc/init.d/S99, I noticed that it had been turned off.
Turning it back on was easy enough, but I’d still need a password. I could have simply overwritten the contents of /etc/passwd, but first I figured I’d search online to see if anyone else had already cracked it.
It turns out that password is somewhat common; a Google search tells me that the password is “j1/_7sxw”. No need to overwrite it, since I know what it is.
Finally, I had to re-package my modifications and get them back onto the DVR. The first step was to repackage the squashfs file.
I then had to put a proper uImage header on the squashfs image via the mkimage command, and flashed it to the device:
The offsets used are taken from binwalk. I transferred the modified image over TFTP into memory on the device, erased the original squashfs image, wrote the new data to flash, and rebooted it.
When it came back up, this time I was able to telnet to it. The credentials worked, and I now had remote access to the live system.
Pivoting to the cameras
The DVR had a second network interface, an internal-only network for the wireless cameras. It was easy to find the network name and password, and I probably could have connected with a regular PC at this point. Instead, I just used the DVR to hop to the cameras.
Using the arp command, I found a list of devices on the “internal” network. I only had one camera powered on at the time, and that was 172.20.14.33. I tried using telnet to get to the camera, using the same credentials as before, and they worked.
It turns out telnet was left enabled on the cameras, no hackery required! One interesting thing to note, the cameras themselves have Ethernet ports. If you can physically plug an Ethernet cable into the camera, you can skip all of the work I’d done up until this point, plug in a PC, and telnet to it that way. I still needed access to the DVR, anyway, for reasons that you’ll see later.
Looking at the camera’s filesystem, it has a single JFFS2 mount that caught my eye.
In that folder were configuration options:
A quick glance through onvif_cfg.ini shows it has the settings we want to change.
I changed name_position and time_position both to 0, disabling them. Since this file is on a writable partition, I simply saved the file and rebooted the camera. No U-Boot flashing required.
That was it! The text is gone. For good measure, I copied out the camera’s filesystem to my PC using netcat, with “tar cfp - /tmp | nc -w3 192.168.1.99 1234” on the device and “nc -l -p 1234 | tar xvfp -” on my PC. I didn’t end up actually needing anything from it, but I wanted to get everything out in case I wanted to make any future changes.
At this point, I thought I was done. I even started repackaging the cameras, until I noticed this:
There’s a WiFi signal strength overlay in the upper right corner! Unfortunately, this one wasn’t going to be a simple configuration change. I determined that the DVR system itself adds the icon, not the camera. I located the relevant images on the DVR to try and get rid of them.
The first thing I tried was to replace the files with transparent images, and flashed the new image. The system rendered them as white boxes, which made it worse. Next I tried simply deleting the files. This caused the DVR system to crash. Alright, we’re doing it the hard way.
What’s rendering it?
The first step was to figure out what was rendering the WiFi signal graphics. To do this, I did a simple grep for the filename:
The dvr_gui binary references the filename. Looking at the list of running processes confirms that it’s running. What this tells me is that the path to the image was hard-coded into the binary, and we’re going to need to patch it out.
I loaded dvr_gui in Ghidra, which is a software reverse-engineering framework developed by the NSA. Searching for the strings, I found this function that loads all of the WiFi images:
Next, I Googled for some error messages that I saw in the binary, like “1-bpp rect fill not yet implemented”. They turned out to be from SDL, an open-source library that I’m familiar with. This allowed me to make my first function label, SDL_SetError.
With SDL_SetError labeled, I could match error strings to the SDL source code, which allowed me to label SDL functions in the binary. The above image, for example, is code from SDL_FillRect. I went on a spree of labeling functions, creating structures, and setting proper arguments for SDL related code.
Now I knew exactly what was happening with those BMP files. They were opened with SDL_LoadBMP_RW and stored into an array. I tracked down where they were used, and found where it eventually calls SDL_UpperBlit to render to the screen. In the code below, FUN_0094034 is passed one of the WiFi images, chosen based on the signal strength, and renders it.
Patching the binary
The next step was to get rid of it. I replaced the four bytes that execute the above function call with 0xE320F000, which is a NOP (no operation) for ARM processors. The result is that instead of calling the function that would render the image to the screen, it just moves down to the next instruction.
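The patch itself is just four bytes at a file offset. Here’s a Python sketch of the idea; the offset in the usage comment is made up, since the real one depends on where Ghidra shows the call instruction:

```python
import struct

ARM_NOP = struct.pack("<I", 0xE320F000)  # the NOP encoding, little-endian

def nop_out(data, offset):
    """Return a copy of `data` with the 4-byte instruction at
    `offset` replaced by an ARM NOP."""
    patched = bytearray(data)
    patched[offset:offset + 4] = ARM_NOP
    return bytes(patched)

# Usage sketch -- the offset below is illustrative, not the real one:
# binary = open("dvr_gui", "rb").read()
# open("dvr_gui.patched", "wb").write(nop_out(binary, 0x41C20))
```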
After replacing the binary in the squashfs image with my patched one, re-flashing the DVR, and booting it back up, the signal strength icon was gone!
Looking back at the WiFi images, the black part seems to get rendered as transparent. I think I could have replaced the WiFi images with all-black images. Oh well.
This ended up being a fun/small reverse engineering project, and there are other areas that I’d maybe like to explore someday, mainly security related.
It’s noteworthy that an attacker could plug Ethernet into a camera, telnet to it using factory-set credentials, and find the WiFi password. It’d be interesting to try and tamper with the video feed. I’m also curious if I could exploit the DVR to get remote access without having to physically flash it.
For now, though, the camera system is performing the job it was purchased for:
In part one, we put together the basic hardware for our solar-powered, Pi-controlled, greenhouse fan. This time, we’ll look at how to actually control the fan with Python on a Raspberry Pi.
Preparing the Raspberry Pi
There are plenty of resources online about getting up and running with a Pi, and that’s not really my objective here, but I’ll describe the high-level steps. I used the Raspbian Lite image, though other distributions can work as well. Installing it is just a matter of writing the image to an SD card, adding an empty ‘ssh’ file to the boot partition to turn on SSH, connecting it to Ethernet, and SSH’ing in. Once you’re connected, you can configure WiFi and unplug the cable.
The Motor Controller
Using the motor controller itself is pretty simple. The Pi controls it through pins on the yellow connector, it’s powered through the green connector, and it connects to the motor with the black connector.
The yellow connector has 5 pins: a 5V power supply (the Pi needs more than the 0.5A it can provide, so we can’t use it to power our Pi), three I/O pins, and a ground. The I/O pins control the state of the motor as follows:
By controlling the power supplied to IN1 and IN2, we can switch between braking, dangling (spinning freely), forwards, and backwards. For example, if IN1 and IN2 both have power, the motor is off, but if we turn on IN1 and turn off IN2, the fan will spin forward. The PWM pin, described more below, controls the speed. If you don’t need speed control, it can just be directly connected to power so that it’s effectively always set to full-speed.
Controlling the Fan State
The Raspberry Pi has a number of GPIO (general purpose input/output) pins. Essentially, each pin can be set to either input, where we can read if there is power on the pin, or output, where we can supply power on the pin (or not). GPIO on the Pi works at 3.3 volts, so you’ll need to make sure any devices you want to use can accept that voltage. Some require 5 volts, for example. The motor controller we’re using operates from 3-5 volts, so we’re good!
First, we’ll control the motor without speed control. I connected the motor controller’s IN1 and IN2 terminals to GPIO 12 and 16, respectively. Here is some Python using the wiringpi library to control the fan. Running it will cause the fan to spin forward at full speed.
import wiringpi

fwd_pin = 12  # IN1
bwd_pin = 16  # IN2

def set_motor(in1, in2):
    wiringpi.digitalWrite(fwd_pin, in1)
    wiringpi.digitalWrite(bwd_pin, in2)

def forward():  set_motor(1, 0)  # Make the motor spin forwards
def backward(): set_motor(0, 1)  # Make the motor spin backwards
def dangle():   set_motor(0, 0)  # Turn off the motor (dangling)
def brake():    set_motor(1, 1)  # Brake the motor

# Initialize the wiringPi library and set both of our pins into output mode
wiringpi.wiringPiSetupGpio()
wiringpi.pinMode(fwd_pin, 1)  # 1 = OUTPUT
wiringpi.pinMode(bwd_pin, 1)

# Turn on the fan
forward()
Speed Control with PWM
Pulse-width modulation, briefly, controls the percentage of time that the pin is on, called the “duty cycle”. If the pin is high half the time, the duty cycle is 50%. The rate of these pulses is the frequency. You can picture this as a square wave. In this case, you don’t really need to know too much about PWM to use it.
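For a concrete feel: at 20khz, one cycle lasts 50 microseconds, so a 50% duty cycle means the pin is high for 25 microseconds of each cycle. The quick arithmetic:

```python
freq_hz = 20_000  # a 20khz PWM signal
duty_pct = 50

period_us = 1_000_000 / freq_hz          # 50.0 microseconds per cycle
on_time_us = period_us * duty_pct / 100  # pin is high for 25.0 of them
print(period_us, on_time_us)  # 50.0 25.0
```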
The Raspberry Pi’s hardware can generate a PWM signal on GPIO 18, so that’s what we’ll use. We’ll connect GPIO 18 to the PWM input on the motor controller, and modify the above code. The motor controller’s manual suggests a 20khz signal, so we’ll aim for that. Here’s a snippet of code showing the PWM setup on the Pi:
pwm_pin = 18  # GPIO 18 supports hardware PWM
clock = 4     # Must be at least 2
max_range = 240  # 19200000 / 4 / 240 = 20000 (20khz)

# Just for verification.
# It's not actually required that we calculate this.
freq = 19200000 / clock / max_range

# Configure PWM on our pin
wiringpi.pinMode(pwm_pin, 2)  # 2 = PWM_OUTPUT
wiringpi.pwmSetMode(0)        # 0 = PWM_MODE_MS (mark:space)
wiringpi.pwmSetClock(clock)
wiringpi.pwmSetRange(max_range)

# Actually start the PWM signal at 50%
duty = 120
wiringpi.pwmWrite(pwm_pin, duty)
The formula for calculating PWM frequency on the Pi is 19200000 / clock / range. The ‘range’ supplied is the maximum value that can be supplied to pwmWrite(), so with the above code we can set the duty anywhere from 0 to 240. We’ve hardcoded the value 120, which will run the fan at 50%.
Here’s a more complete version, one that allows us to specify a percentage on the command line:
import sys
import wiringpi

fwd_pin = 12
bwd_pin = 16
pwm_pin = 18
clock = 4        # Must be at least 2
max_range = 240  # 20khz

# Sanity checks
if clock < 2:
    print("Clock must be at least 2")
    sys.exit(1)
if len(sys.argv) < 2:
    print("Usage: %s <Duty Percent>" % sys.argv[0])
    sys.exit(1)

# Our control functions
def set_motor(in1, in2):
    wiringpi.digitalWrite(fwd_pin, in1)
    wiringpi.digitalWrite(bwd_pin, in2)

def forward():  set_motor(1, 0)  # Make the motor spin forwards
def backward(): set_motor(0, 1)  # Make the motor spin backwards
def dangle():   set_motor(0, 0)  # Turn off the motor (dangling)
def brake():    set_motor(1, 1)  # Brake the motor

# Setup wiringpi and GPIO
wiringpi.wiringPiSetupGpio()
wiringpi.pinMode(fwd_pin, 1)  # 1 = OUTPUT
wiringpi.pinMode(bwd_pin, 1)

# Convert the requested percentage to
# a value between 0 and 'range' (240)
duty = max_range * int(sys.argv[1]) // 100

# If 0 is given, just turn it off.
if duty == 0:
    dangle()
    sys.exit(0)

# Configure PWM
wiringpi.pinMode(pwm_pin, 2)  # 2 = PWM_OUTPUT
wiringpi.pwmSetMode(0)        # 0 = PWM_MODE_MS
wiringpi.pwmSetClock(clock)
wiringpi.pwmSetRange(max_range)
wiringpi.pwmWrite(pwm_pin, duty)

# Enable power
forward()
Next time, we’ll add a temperature sensor to control the fan automatically.
In this project, I’m using a solar panel to charge a 12v Marine/RV battery, which will power a Raspberry Pi and an exhaust fan for an off-the-grid greenhouse. My first goal was just to have an exhaust fan that keeps the temperature in the 80-85°F range, but it’d also be cool to add a web interface for climate and power stats, automatically control lights, water plants, etc. First, we’ll just start off with the exhaust fan.
The first thing we needed was a fan. I installed a 20″ DC Snap-Fan with the help of my father-in-law. It operates on 12 to 24 volts, with more speed as the voltage increases. To test it out, we just hooked it up to a small timer and a 12 volt battery. (The fan blades look strange in the photo because it’s running.) With the timer in place, the fan would stupidly run during the daytime, no matter the temperature. Time to make it smarter!
An enclosure. This is just a nice box to house all of our electrical components.
A Raspberry Pi to act as the brains. Mainly we need it for WiFi and GPIO (general purpose I/O), but they also have USB, HDMI, a GPU, and they’re fairly cheap.
An MPPT solar charge controller to charge the battery. This connects to a solar panel and a 12v RV battery, and keeps the battery charged. I like this one because it has an RS-485 port, so we can talk to it from the Pi.
A DC-DC step down converter. Our battery is 12 volts, and the Raspberry Pi needs 5 volts through a micro USB plug. This will bring the voltage down and provides the micro USB plug needed by the Pi.
A pi-ezconnect hat. This just sits on top of the Pi, and breaks out all of the pins into easy to use wire terminals. Not completely necessary, but nicer than just sticking all of the wires into a breadboard.
A motor controller board. This board lets us easily control the fan from the Pi. We’ll talk more about how to use it in the next post.
Above is my assembled product. The solar charger terminals are connected to the battery, and the load terminals are connected to the fuse panel (through an inline fuse on the positive wire). The fuse panel is connected to both the DC-DC step down converter (which powers the Pi) and the motor controller. The Pi and the motor controller are both mounted to the wood using nylon standoffs. You can also see a USB to RS-485 adapter plugged into the Pi, but ignore that. I ended up using something else, and I’ll explain it later.
Next, we mounted it to the actual greenhouse wall, under the fan:
I’ll talk more about the specific components, and how to make them work, in part 2.
For my first HackRF project, I thought I’d try to create a replacement for the remote controls used by my two Hampton Bay ceiling fans. First, we’ll have to understand how the remote communicates with the fan, which is what we’ll do in this post. These remotes let you control the light level and fan speed. The other features, like timers and temperature-based fan speed, are all processed on the remote itself; it just tells the fan the speed and light level to use. There are four dip switches under the batteries that let you “pair” the remote with an individual fan.
To find the remote’s transmit frequency, I looked up its FCC ID on the FCC’s website, which told me it uses 303.85 MHz. With that knowledge, I used hackrf_transfer to record a sample of the remote telling the fan to go to “high” with the lights turned off. The command I used is as follows:
After it was running, I pressed the button on the remote to turn the fan on, waited a moment, and stopped the recording. I opened the file in baudline as a “raw” file, with a sample rate of 8000000, 2 channels, quadrature and flip complex checked, using an 8-bit signed decode format. Some older blog posts will claim that the data is unsigned, but that was changed in more recent firmware updates.
Looking at the signal in baudline’s waterfall view, you can see some kind of simple modulation going on. The transmission is broken up into short (~300 microsecond) and long (~600 microsecond) pulses, with a ~300 microsecond pause between each pulse. It looks a lot like binary to me. I assumed (correctly) that a short line was a 0, and a long line was a 1. It’s not shown below, but the remote actually transmits this pattern several times, with a delay between each packet.
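Rather than eyeball every packet, the short/long classification can be automated. A quick Python sketch, assuming you’ve already measured each pulse width in microseconds:

```python
def pulses_to_bits(widths_us, threshold_us=450):
    """Classify each pulse width: ~300 us -> 0, ~600 us -> 1.

    450 us sits halfway between the two nominal widths, which leaves
    room for real-world jitter on either side.
    """
    return [1 if w > threshold_us else 0 for w in widths_us]

print(pulses_to_bits([310, 590, 280, 610]))  # [0, 1, 0, 1]
```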
Now that we know what the signal looks like, I made a quick flowgraph in GNU Radio Companion for analysis. I didn’t want to repeat the above steps, reopening the file in baudline every time I wanted to try a different setting on the remote. The graph listens on the remote’s frequency, and then uses the AM Demod block and a WX GUI Scope Sink to show the signal.
With the flowgraph running, I can see the waveform immediately, and examine the individual bits after each button press. You’ll need to turn off autorange, and increase your seconds/counts per division until something useful appears. After pressing a button on the remote, pause the scope to see the packet.
Next, I made small configuration changes on the remote, just to see which bits would change for a particular setting. By analyzing the remote in a lot of different states and recording my findings, I was able to determine where each setting was in the packet, and how to interpret it.
The preamble is consistent for every single packet, so no big mystery there. The dip switch bits correspond directly to the physical switches, albeit reversed: the left-most switch controls the right-most bit. The lowest light level the remote would transmit was a 22 (010110), and the highest was a 62 (111110). A 63 (111111) turns off the light completely. The two fan speed bits represent low (00), medium (01), high (10), and off (11). Two of the bits are mysteriously always set to 1 in every combination that I’ve tried, so we’ll just ignore those. The checksum was easy to recognize: it varied wildly with different settings, but was consistent when the same settings were used.
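Those observations map directly to code. Here’s a hedged Python sketch of the individual fields as described above — the overall bit layout and the checksum aren’t covered, so these helpers are illustrative only:

```python
FAN_SPEEDS = {0b00: "low", 0b01: "medium", 0b10: "high", 0b11: "off"}

def encode_light(level):
    """Light field: 22 (dimmest) through 62 (brightest); 63 = off."""
    if level == 0:
        return 0b111111  # 63 turns the light off completely
    return max(22, min(62, level))

def dip_bits(switches):
    """Four dip switches; the left-most switch sets the right-most bit."""
    value = 0
    for i, on in enumerate(switches):  # switches[0] is the left-most
        if on:
            value |= 1 << i            # ...and it lands in the lowest bit
    return value

print(FAN_SPEEDS[0b10], encode_light(0), dip_bits([1, 0, 0, 1]))
```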
So now we know basically how to talk to the fan. To actually send a packet, we’ll need to know how to compute the checksum, but more on that next time. In the next post, we’ll build a class in C++ to generate the actual packet structure.