Hack Hitchin

Pi Wars week 5: The Canyons of Mars

29th Oct 2018

The new maze course has been revealed for Pi Wars:

The actual course being revealed is a bit of a surprise, as the challenge description initially said the design would be kept secret until the day, encouraging either a generic approach (like wall following) or a full ‘maze solving’ strategy. Now that the actual course is known, a more specific, optimised solution will probably be faster and more reliable.

Last year we looked at different approaches to the course, to see which might give the fastest or safest route. This year we’re expecting to have a better handle on where we are on the course (using the encoders and IMU, as well as distance sensors), which may open up some faster and more sophisticated lines. So we thought it was worth sketching the options out and comparing the theoretical times, to see whether the extra development would pay off.

First up we have the simplest planned route, one that’s relatively easy to program using only information from two or three distance sensors and is fairly safe: the straight, centre line course, turning on the spot in the corners:

This is the strategy we started with last year (see test video here: https://youtu.be/EV7YIHr5feg?t=518). As in previous years, we’ve also developed a simple model of how fast we expect our robot to accelerate:


Previously this approach has been fairly accurate at predicting times in the straight-line speed challenge, so we’re reasonably confident in using it to estimate performance for the maze.

Using this model, we can combine the acceleration profile with the length of each straight-line segment of the route above and estimate the transit times. Adding on a time to turn 90 degrees at each corner (we’re assuming 0.1 seconds, as that’s about what Piradigm needed to turn 90 degrees) gives a predicted total time of 5.3 seconds to complete the maze. If you attended previous Pi Wars, you may think that’s a ridiculous prediction, as most competitors took 20-30 seconds, but that was mainly because they took a very tentative approach to the maze. Last year in practice Piradigm could fairly repeatably achieve 10-second times whilst not running “full throttle” (https://youtu.be/EV7YIHr5feg?t=541), and we did achieve one ~6 second time, so it’s certainly possible.
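If you want to play with this kind of estimate yourself, here’s a minimal sketch of the sort of model we mean: a symmetric accelerate-cruise-brake profile per segment, plus a fixed time per spot turn. The acceleration, top speed and segment lengths below are illustrative placeholders, not our measured figures or the real course dimensions.

```python
import math

def segment_time(distance, accel, v_max):
    """Time to cover `distance` (m) from rest to rest, accelerating and
    braking at `accel` (m/s^2) with a top speed of `v_max` (m/s)."""
    d_ramp = v_max ** 2 / accel            # distance used ramping up + braking
    if distance >= d_ramp:
        # trapezoidal profile: ramp up, cruise, ramp down
        return 2 * v_max / accel + (distance - d_ramp) / v_max
    # triangular profile: never reaches top speed
    v_peak = math.sqrt(accel * distance)
    return 2 * v_peak / accel

# Illustrative numbers only, not the real course or robot figures:
segments = [1.2, 0.6, 0.6, 1.2]            # straight segment lengths, metres
turn_time = 0.1                            # seconds per 90 degree spot turn
total = sum(segment_time(d, accel=6.0, v_max=3.0) for d in segments)
total += turn_time * (len(segments) - 1)
print(f"predicted time: {total:.1f} s")
```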

Can we do any better than the straight line route though? As you may have seen in the video linked above, a slight variation is to do smooth turns for the corners instead of stopping and turning on the spot:

 

Last year we found this to be much faster and no more risky. Predicted time: 4.3 seconds, a handy saving. This assumes each corner is taken at 1.6m/s, which is approximately the limit of traction of the tyres. Hopefully the IMU will allow us to skid a little in the corners in a stable, repeatable way; otherwise we’d have to go a little slower to retain control.

So that’s good, but can we do any better? It still doesn’t look much like a ‘racing line’ as you’d see in motor sport. If we know where we are on the course at all times, can we corner faster, or tighter? We can approximate something like a racing line by increasing the turn radius (but keeping it constant) and clipping the apexes on key corners:

(Note this isn’t a true racing line: drivers usually won’t drive at a constant radius through a corner; the line is a different, parabola-like shape with a late apex, and they start accelerating before finishing the corner.)

With the larger-radius turns, we think the corners could be taken at more like 2.3m/s, and the distance travelled is a fair bit shorter than the second route, so the predicted time plummets to 3.2 seconds!

That is a nice target number, but in reality we wouldn’t plan a route taking us so close to the walls: we’re unlikely to be perfectly positioned, and hitting a wall can disorientate the robot and end the run. How much longer would it take if we took a racing line with a safety margin, say 75mm of clearance?

The ‘safer’ racing line is ~0.3m longer and the slightly tighter corners mean going a little slower at 2m/s, but the predicted time is still only 3.6 seconds.

That’s all great, but the above predictions were assuming no downforce. As we discussed in our week 3 update, we’re intending to fit a vortex downforce generator. So how much faster can we go with that?

For the ‘safer’ racing line we’re predicting a 3.1 second time, and the faster, riskier line a bonkers 2.6 seconds.
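As a sanity check on those corner speeds, the traction-limited speed for a corner follows from the friction circle: the tyres can supply at most the friction coefficient times the normal load, and downforce adds to that load without adding mass. Here’s a minimal sketch; the friction coefficient, corner radius and downforce figure are assumptions for illustration, not measured values.

```python
import math

G = 9.81  # m/s^2

def max_corner_speed(radius_m, mass_kg, mu=1.0, downforce_n=0.0):
    """Traction-limited corner speed: lateral force m*v^2/r must not
    exceed mu * (weight + downforce); solve for v."""
    grip = mu * (mass_kg * G + downforce_n)
    return math.sqrt(grip * radius_m / mass_kg)

mass = 0.8  # kg, roughly the expected all-up weight
print(max_corner_speed(0.3, mass))                    # ~1.7 m/s with no downforce
print(max_corner_speed(0.3, mass, downforce_n=9.0))   # ~2.5 m/s with ~900 g of suction
```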

 

Some crazy numbers; let’s hope the software and electronics can give us the control needed for the hardware to deliver the times it’s capable of.

Pi Wars week 4: schematics

23rd Oct 2018

Not much progress to show again this week. We’ve again been researching and trying to get our heads around Kalman filters, and have been learning a new software package to design our chassis/pcb.

In every one of our previous Pi Wars entries, we’ve had issues with loose wires causing erratic behaviour at some point. We’ve often wondered if having a PCB made, to eliminate much of the wiring, would be easier and more reliable. So this year we’re going to try to have as few cables and connectors as possible and mount most components directly to a PCB.

We’ve designed a few small pcbs in the past, but we’ve never been happy with the pcb design software. This week we’ve been learning Diptrace, and now have the beginnings of a schematic to show for it:

We still need to add some components, like the drive motor controllers and connectors for parts that won’t be mounted directly to the PCB (like the IMU, batteries, motors etc.), then we can move on to routing the actual PCB design.

In other news, the cheap encoders have arrived, but we’ve not had a chance to test them yet.


Pi Wars week 3: Vortex generator

16th Oct 2018

Pi Wars has many challenges where fast acceleration and cornering are important for the fastest times. Last year, whilst Piradigm wasn’t the most powerful robot, when running well it was still traction limited in the Minimal Maze and Over the Rainbow challenges. This year the Straight-ish Line Speed Test may also require good cornering, and the Pi Noon challenge always favours good drivers (or code!), even more so if the chassis has good handling. We’re hopeful that our software and hardware this year will be capable enough that extra traction would increase performance. To that end, we’ve started testing a novel ‘vortex generator’ style of downforce generation:

Before we explain the weird design,  some background:

Formula One cars corner much faster than normal cars due to their aerodynamics: as they move along, they use wings to deflect the air upwards, pushing the car into the ground and increasing grip without increasing weight. This works great if you have the power and speed for it. Unfortunately most Pi Wars challenges are completed at less than 5mph, so the wings would need to be huge to have any effect.

In another, more closely related analogy, Micromouse robots need to accelerate and corner quickly to solve their mazes in the fastest time. As designs have developed, the winning teams in that competition now all use fans to generate downforce. The mice have a flexible skirt under their chassis, much like a hovercraft, but arranged so the fan sucks the air out from underneath, creating low pressure even when the mouse isn’t moving and sucking the robot to the ground. This works well because their course is very flat and smooth, so the skirt has almost no leaks. Check out their incredible performance in this video from a competition this year:

Inspired by those, and the small toys that can run on ceilings, I first included a downforce generator in my entry for power tool drag racing:

This design was a little different to the mechanisms above: it used a vortex suction generator, a vaned, high-speed spinning bowl that spins the air rather than sucking it out, generating the required low pressure with lower power consumption and less reliance on a good seal with the ground. The theory is that because the air is spinning, there must be a pressure gradient to keep the air moving in a circle. Since the outside of the bowl is at atmospheric pressure, the centre must be at a much lower pressure, sucking the bowl down.

For Pi Wars we’re hoping to use a similar design but on a smaller scale, and only if the rest of the system is fast enough to benefit from it. So far we have 3d printed the above CAD:

And done some spin up tests in a test rig:

In this test, we held the rotor above a metal toolbox, which was supported by some scales. We were hoping both to test whether the rotor could survive the very high speeds required and, if it did, to see what level of downforce we could generate (measured by the lift, or reduction in weight, of the toolbox). For the test we stood well back, with a full face shield on, in case the worst happened.

From the test video, you can see it was a success: the rotor survived spinning up to ~14,000rpm and generated over 900 grams of downforce, despite having over 5mm of ground clearance! For comparison, from the latest CAD model we have of the overall robot design, we’re expecting the all-up weight to be about 800 grams. That means we should be able to corner at up to 2g, if we have sufficient control.
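For the curious, here’s an idealised estimate of what a spinning bowl of air can generate: if the air inside rotates roughly as a solid body, integrating the radial pressure gradient gives a net suction of about πρω²R⁴/4 over the bowl. The rotor radius below is a guess for illustration (not the real dimension), and the model ignores leakage around the rim, so it should over-predict what we measured.

```python
import math

rho = 1.2      # air density, kg/m^3
rpm = 14000    # rotor speed from the spin-up test
R = 0.05       # rotor radius in metres -- a guessed, illustrative value

omega = rpm * 2 * math.pi / 60                  # angular speed, rad/s
force = math.pi / 4 * rho * omega**2 * R**4     # net suction force, newtons
print(f"~{force:.1f} N (~{force / 9.81 * 1000:.0f} g) of downforce")
```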

 

On the software side, we’ve been further researching Kalman filters and how we might fuse encoder data with data from the IMU to give us the best possible positional information. We’ve also had a few more components arrive:

multiplexer, ToF sensors, IMU
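As a flavour of the Kalman-filter research mentioned above, the sort of thing we have in mind is a simple predict/update loop; the sketch below tracks a single heading state, propagated by the gyro and corrected by the magnetometer. The noise values are illustrative placeholders, and the real filter would carry more states (position, velocity) with the encoder data in the prediction step.

```python
class HeadingFilter:
    """Toy one-state Kalman filter for heading, in degrees."""

    def __init__(self, q=0.01, r=4.0):
        self.heading = 0.0   # current estimate
        self.p = 1.0         # estimate variance
        self.q = q           # process noise added per predict (gyro drift)
        self.r = r           # measurement noise of the magnetometer

    def predict(self, gyro_rate_dps, dt):
        """Dead-reckon forward using the gyro rate (degrees per second)."""
        self.heading += gyro_rate_dps * dt
        self.p += self.q

    def update(self, mag_heading_deg):
        """Blend in a magnetometer reading, weighted by the Kalman gain.
        (Angle wrap-around is ignored in this toy version.)"""
        k = self.p / (self.p + self.r)
        self.heading += k * (mag_heading_deg - self.heading)
        self.p *= (1 - k)
        return self.heading
```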

 

Pi wars week 2: encoders and magnetometers

8th Oct 2018

It’s been a productive week for the Tauradigm team: we’ve tested a prototype projectile launcher, researched sensors for odometry, mapped out the system architecture, evaluated a magnetometer board and ordered some of the key components.

 

Projectile Launcher

First, the projectile launcher. Since the concept design (posted last week) is quite similar to Hitchin’s old skittles ball flinger, I thought it was worthwhile to just use that for a test, to check it was feasible to use with the soft juggling balls. I 3D printed a holder to mount the flywheels closer together, so they’d grip the balls:

modified ball flinger and juggling ball. notice the scuff marks on the ball from the tyres

The result is just about OK. It could do with the balls having a bit more energy, so they fly rather than bounce towards the targets. I’m not sure if it’s because the flywheel tyres are soft, as well as the balls, so they’re not getting much grip, or because the flywheels are too far apart, or because there’s not enough energy stored in the flywheels.

Still, a promising result and worth further development, since the balls knocked over the books and there’s no way the elastic bands from last year would have managed that.

 

Odometry sensors

Odometry is the fancy word for measuring how far you’ve travelled. The classic example is wheel (or shaft) encoders, where markers on the wheels or motors are counted, allowing the travel distance to be estimated (it’s an estimate, as the wheels may slip). There are now also optical flow based sensors, where a series of images is taken, key reference points are identified in each frame, and the travel distance is worked out from there. This can eliminate the wheel-slip error, but it introduces other potential errors: for example, if the camera-to-ground distance varies, the perceived speed changes.

As mentioned previously, we ideally want an encoder that doesn’t interfere with our magnetometer (no long-range magnetic fields), is fast (better than 100Hz), is precise (better than 1mm resolution when used on a 50mm wheel, so more than 100 counts per rev) and isn’t sensitive to dirt, contamination or sunlight.

Shaft encoders are split into a few variants:

 

Optical flow

Optical flow solutions fit into two categories: general-purpose cameras with specialist software analysis, and dedicated hardware-based solutions, often based on optical mice. The mouse chips tend to be cheaper and faster.

So, lots of options available, none of them ideal. We’re going to start with the cheapest and see what we learn.

 

System Architecture

For most challenges, we’re aiming to use map-based navigation and to rely on ‘external’ sensors as little as possible (after the issues with light interference last year). To do this we’re planning to have a microcontroller very rapidly reading simple sensors like the IMU (including the magnetometer) and encoders, keeping track of where it thinks it has travelled and adjusting the motors accordingly, aiming for a given waypoint, whilst keeping the Pi up to date with what it’s doing and where it thinks it is.

The Pi will keep track of the location on a representative map, use key features to update the perceived location on the map, and give the microcontroller new waypoints to head for. Localisation (figuring out where we are on the map) and route planning will be separate modules. See this page: https://atsushisakai.github.io/PythonRobotics/ for a collection of algorithms written in Python that achieve this, along with animations illustrating each one’s approach. Our Pi will use ToF sensors and image processing to detect obstacles and use them as inputs for correcting the perceived location.
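To give a flavour of the microcontroller side of that split, here’s a minimal sketch of a single drive-to-waypoint step: compute the bearing to the waypoint, compare it with the current heading, and steer proportionally. The gains, speeds and pose format are placeholders, not our actual interface.

```python
import math

def drive_to_waypoint(pose, waypoint, base_speed=0.5, k_turn=1.5):
    """pose is (x, y, heading_radians); waypoint is (x, y).
    Returns (left, right) motor commands. Gains are illustrative only."""
    dx, dy = waypoint[0] - pose[0], waypoint[1] - pose[1]
    target_heading = math.atan2(dy, dx)
    # heading error wrapped to [-pi, pi]
    error = math.atan2(math.sin(target_heading - pose[2]),
                       math.cos(target_heading - pose[2]))
    turn = k_turn * error
    return base_speed - turn, base_speed + turn
```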

 

Magnetometer Evaluation

Since we already had a GY-271 magnetometer (an HMC5883L breakout board), we thought it was worth doing a few tests to see if the magnetic field of the motors would interfere with the results.

The HMC5883L is a 3-axis magnetometer, so it doesn’t know its orientation relative to gravity or its turn rate like an accelerometer or gyroscope does, but by measuring the local magnetic field the results can be used to calculate the current heading.

We first did a bench-top test with a single motor and the board. We found that at about 150mm distance, the motor could cause a +/-2 degree heading error. At 100mm it was +/-20 degrees, and at 50mm it caused more than 45 degrees of error. A few people on Twitter suggested using materials with a high magnetic permeability as shielding for the motors. We managed to borrow some flexible magnetic keeper material (like Giron?), so we tried that:

Interestingly, in most orientations this gave no improvement, and in some orientations it was worse than no shielding at all. We think the material may have been concentrating the motor’s magnetic field into a shape that reached the magnetometer more effectively, or maybe the Earth’s field was also being distorted by the material. We couldn’t entirely enclose the motor, as we need the power wires to exit one end and the gearbox output shaft to exit the other. If anyone can suggest a more suitable material or arrangement, please let us know in the comments. Good magnetic shielding materials seem to be very specialist and expensive, and would likely add more weight than a plastic sensor tower, so for now we’re going with the tower.

Here you can see we’ve mounted a wooden stick to the front of our Tiny test chassis. Since the Explorer pHAT doesn’t have pass-through headers, we initially used a PiFace solderless expansion shim to break out the GPIO pins, but we found some of the pins weren’t making a connection, so we had to revert to using jumpers to wire everything up (the Explorer pHAT floating in the air on wires, instead of sitting on the Pi). After a bit of calibration, we got the Tiny to drive towards north fairly successfully:

From this test, it’s clear we need other sensors to help us figure out how far off course each little deviation takes us, so we don’t just turn back to the correct heading (ending up parallel to the original path but offset), but can turn back onto the desired path. We also need to find a better calibration routine: the few libraries we found still left some errors; north was OK, but south was closer to south-south-east. Here’s an example of one of the early calibration test results:

This is after some scaling and conversion, so the units are a little meaningless, but it shows the magnetic field strength in X and Y as the sensor was rotated. Since the Earth’s field is constant, this should result in all the points being an equal distance from the origin; clearly this plot is offset to the right and downwards. The scaling looks OK, as both axes vary by the same amount. After we changed the offsets we got much better results.
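For reference, that hard-iron correction amounts to finding the centre of the ring of points and subtracting it before taking the heading. A minimal sketch of the idea (the function names are our own, not from any particular library, and it ignores soft-iron scaling and tilt):

```python
import math

def hard_iron_offsets(samples):
    """Estimate hard-iron offsets from raw (x, y) magnetometer readings taken
    while rotating the sensor through a full circle: the offset is the centre
    of the ring of points."""
    xs = [s[0] for s in samples]
    ys = [s[1] for s in samples]
    return (max(xs) + min(xs)) / 2, (max(ys) + min(ys)) / 2

def heading_degrees(x, y, offsets):
    """Heading from the offset-corrected X/Y field components (sensor held flat)."""
    ox, oy = offsets
    return math.degrees(math.atan2(y - oy, x - ox)) % 360
```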

 

Parts are arriving

We’ve started ordering the parts we’re confident we will need, like a Pi, Pi camera and motor controllers:

So lots of progress. Hopefully next week we’ll evaluate a few more sensors and develop the Tiny test bed further.

Pi wars 2019 – it begins!

30th Sep 2018

First the big news: Hitchin’s A and B teams both got in to Pi Wars 2019!

Before the excitement fades, the feeling of worry sets in, how much work have we just committed to? Can we get it done on time this year? Better get cracking!

Mixed progress has been made on Tauradigm this week.  We’ve fleshed out the CAD design a little more, estimating what components we need for all the challenges, and checking how they’ll fit:

initial layout

Pi 3B+ in the centre, batteries either side, motor controllers behind them, with a Teensy at the rear (to quickly track the encoders and IMU). The fields of view of the distance sensors (mounted on the underside of the main board) are represented by the cones pointing outwards. The grey sketched rectangles represent the IMU (gyro, accelerometer and magnetometer, for figuring out the robot’s orientation and movement) and a multiplexer, so we can talk to multiple sensors easily.

At this point it was looking fine.

rear encoder

We’d even included a model of the encoders we were planning to use. It was at this point we realised a couple of issues. Firstly, the motors we’d been hoarding (high-speed, high-power 12mm motors, bought really cheap off eBay in a pack of ten) don’t have rear motor shafts, so those encoders wouldn’t fit. Secondly, would the encoder magnet interfere with our magnetometer? Come to think of it, would the magnets in the motors interfere? Some quick research suggested they would; many people have had issues getting magnetometers to work reliably. It looks like we might have to move the IMU as far from the motors as possible. Like on the top of a small tower! OK, if that’s what we need to do…

 

layout

pi noon setup

Next was a quick mock-up of the Pi Noon arrangement, with the camera angled up to see the pin and opponents’ balloons (an issue we had last year, where the balloon disappeared from view just at the critical moment!), and you can see the tower for the IMU. We’ve also added the 5V regulator and the barrel jack for running from a power supply. Looking OK, but space is getting tight and we still need to sort out a solution for the encoders.

ball flinger attachment

 

Next was a drawing of the firing mechanism for Space Invaders (target shooting). This design is based closely on Hitchin’s previous ball launcher (for skittles, http://hackhitchin.org.uk/finalstraight/) but this time firing soft juggling balls. There’s been some discussion with the Pi Wars organisers about whether a speed or energy limit might apply, so this might need to be revised, as it requires quite a lot of energy to work properly even though the speed isn’t that high (probably slower than a Nerf dart). We’re also considering vacuum cannons 🙂

Drawing up the launcher highlighted an issue with the camera and IMU mount we had for pi noon above, so that’s going to need a rethink.

 

For the encoders, it looks like we could go magnetic or optical on the output shaft, or use optical mouse sensors looking at the floor. The magnetic sensor is probably the most reliable but has a very low resolution and might interfere more with the magnetometer; the optical sensor may be a little big and may be sensitive to dirt; and the mouse sensors are sensitive to distance from the ground (and might need to be too close to be practical).

magnetic encoder on the output shaft

 

We’re planning a separate post on sensor selection, as it can be challenging and confusing.

 

That’s it for this week. Hopefully we can soon start ordering parts to test.

 

Pi Wars 2019 – The entries are in

24th Sep 2018

It’s that time of year again, when Hitchin Hackspace get slowly obsessed with Pi-powered robots in the build-up to Pi Wars. Hitchin have again applied with two entries; this post will be focused on the entry for Tauradigm, another fully autonomous entry and successor to last year’s Piradigm.

Key bits from the application form:

Team/Robot name: Tauradigm
Team category: Advanced or Professional
Team details: Mark Mellors: Professional Mechanical Engineer, lead on the mechanical and electrical side of the robot. Previously did the majority of the work (across all disciplines) on last year’s (disastrous/’ambitious’) Piradigm.
Rob Berwick: Professional Computerman, lead on the software side. Previously an adviser and supporter on Piradigm
Both Mark and Rob have also contributed to all of Hitchin Hackspace’s Pi Wars entries, from main team members in ‘16 to occasional advisers in ‘18.
Proposed robot details: Fully autonomous in all challenges, obviously
4 wheel drive, Lego/custom wheels and tyres as the default configuration
Custom chassis using unusual materials/construction techniques 🙂
Stylish, lightweight, UFO themed bodywork
Home made pcbs for power and signal distribution
Arrays of various sensors, including: internal (encoders and inertial measurement), proximity (distance sensors, reflective) and vision (pi camera), possibly supported by offboard sensors (beacons, ‘eye in the sky’)
Software:
Separated architectural units
Fast IMU/encoder feedback with continuous location estimation, much like actual spacecraft (so build a virtual map then use that for navigation, with occasional corrections when known markers are detected)
Advanced route planning algorithms
Automated hardware self test
Attachments for challenges:
Space invaders will have a much more powerful soft projectile system with vision based auto aiming supported by encoders and distance sensors
Spirit of Curiosity may use one or two homing beacons or visual markers
Navigate by the Stars and Blast Off may use a rocket (jet, no flames or pyrotechnics!) propulsion module
The Hubble Telescope Nebula Challenge and Pi Noon: The Right Stuff will use vision again, supported by distance sensors and encoders (plus be more tolerant of changing lighting conditions this time!)
The Apollo 13 Obstacle Course will be attempted autonomously without on course markers this year, using a range of sensors
Why do you want to enter Pi Wars?: Retribution! To show that full autonomous is possible! But also, as before, to inspire, share and stretch ourselves.
Which challenges are you expecting to take part in?: Remote-controlled challenges, Autonomous challenges, Blogging (before the event)
Any more information: Piradigm would like to attend as an exhibitor this year

The entry only went in last week (sorry Mike and Tim for leaving it late!) yet we’ve already realised we might need to make some changes to the robot design.

Initial draft CAD rendering:

Render of Tauradigm


Fingers crossed we get accepted!

We should find out in the next week

Piradigm – Approach to Piwars challenges 2018

12th Nov 2017

Quick progress update: chassis wired and driving in remote control mode!

 

So now I’ve got the basic functionality working, I can start on programming for the challenges. In principle I could compete in three of the challenges right now, as they’re intended to be entered with manual control. As I mentioned last time though, I’m hoping to attempt them autonomously, as well as the mandatory autonomous challenges. Here’s how I’m intending to approach them all:
Somewhere Over the Rainbow (driving to coloured balls in sequence)
I’m hoping to use OpenCV’s contour detection for this challenge, to spot shapes which can then be filtered by size, shape and colour to identify the coloured balls, then use the size of the ball (and possibly the height of the arena) to help judge the distance to the ball. I have a few ideas to help make the image processing quicker and more reliable: have the camera at the same height as the balls, and prioritise looking where the balls should be based on their expected height and angle relative to the start position.
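A rough sketch of the sort of pipeline that would be: threshold in HSV, clean up the mask, then take the largest contour and its enclosing circle (whose radius gives a crude distance cue). The colour bounds and area threshold here are placeholders that would need tuning for the actual balls and lighting.

```python
import cv2
import numpy as np

def find_ball(frame_bgr, hsv_low, hsv_high, min_area=100):
    """Return (cx, cy, radius) in pixels of the largest blob within an HSV
    colour range, or None if nothing plausible is found."""
    hsv = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2HSV)
    mask = cv2.inRange(hsv, np.array(hsv_low), np.array(hsv_high))
    mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, np.ones((5, 5), np.uint8))
    found = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    contours = found[0] if len(found) == 2 else found[1]   # OpenCV 3/4 compat
    contours = [c for c in contours if cv2.contourArea(c) > min_area]
    if not contours:
        return None
    (x, y), r = cv2.minEnclosingCircle(max(contours, key=cv2.contourArea))
    return int(x), int(y), int(r)

# e.g. a red-ish ball, bounds to be tuned:
# find_ball(frame, (0, 120, 80), (10, 255, 255))
```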

 

Minimal maze
To help with many of the challenges, I’m intending to use external markers (like QR codes) to help track the robot’s position and orientation on the course. This year the Minimal Maze allows markers to be used, so I’m intending to put an ArUco marker on each corner, in the hope that a wide-angle lens will always be able to see at least one marker, giving me my position on the course at all times. I’ll preprogram waypoints for each corner of the track and use the markers to help navigate to them in sequence.

Hitchin Hackspace built a copy of the minimal maze last year. It’s the same this year, with different colours for each wall

Straight line speed challenge
Like the maze, I’m intending to put a marker at the far end of the course and just drive towards it. Once the marker reaches a certain size in the image, I’ll know I’m at the end of the course and can stop. This is the first challenge I’m going to attempt to program. If I get all the others done and working reasonably reliably, I may come back and try to do it without a marker, just using the colour of the edges of the course as guidance.
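A minimal sketch of that marker detection and size check, assuming the standard cv2.aruco module from the opencv-contrib package; the marker dictionary, expected marker id and size threshold are all assumptions to be tuned.

```python
import cv2
import cv2.aruco as aruco

ARUCO_DICT = aruco.getPredefinedDictionary(aruco.DICT_4X4_50)  # assumed marker family

def marker_sizes(frame_bgr):
    """Return a dict of marker id -> apparent size in pixels (mean side length)."""
    gray = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2GRAY)
    corners, ids, _ = aruco.detectMarkers(gray, ARUCO_DICT)
    sizes = {}
    if ids is not None:
        for marker_id, c in zip(ids.flatten(), corners):
            pts = c.reshape(4, 2)
            side = sum(cv2.norm(pts[i] - pts[(i + 1) % 4]) for i in range(4)) / 4
            sizes[int(marker_id)] = side
    return sizes

# stop when the end-of-course marker (id 0 here, an assumption) looks big enough:
# if marker_sizes(frame).get(0, 0) > 120: stop()
```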

 

Duck shoot (target shooting)
This challenge is intended to be shooting targets whilst manually steering the robot. I’m hoping to do autonomous aiming through object detection. I’ve picked a gun (blog post coming up on that) and got it to be electronically actuated, so I “just” need to find the targets and aim. I’m hoping the targets will be rectangles of a consistent colour, or at least something easily identifiable using OpenCV, but that’s up to the organisers and course builders to let me know. I know roughly the size, position and distance of the targets, so I may be able to use that to narrow down which detected shapes are targets.

 

Pi noon (1 v 1 balloon popping)
This is going to be tricky! I’m again intending to put markers at the corners of the arena so I can judge position. After that, I can either drive around randomly or look for coloured round things (balloons) and drive towards them. Hopefully after I’ve got the code for Over the Rainbow and Minimal Maze challenges working, this one should be more than halfway there. I think spotting balloons may be tricky though.
Slightly deranged golf
The golf challenge is a little like pi noon, in that there’s a round object on a course that’s going to be difficult to catch. I’m going to attempt it the same way, programming a waypoint for the hole and looking for round white objects that might be balls. Very tricky.

Golf Course from last year, note the steep hill at the start that caused the ball to roll into a hazard

Obstacle course
Again, by having markers on the corners of the course and programming waypoints, like the minimal maze, I’m hoping to autonomously navigate the course. The extra 3D-ness of the obstacle course will make this more difficult, as the markers may not always be visible. The main difficulty will be the turntable though; I may need to put a marker on the turntable, or use some other trick. I’m leaving this challenge until last, as it’s so difficult.

 

Obviously it’s still early days, so these plans may change as I get into it and find the image processing is too challenging for some reason, but hopefully I can complete at least some of the challenges using the camera.

next week: design and build notes

 

Reminder: please vote for Hitchin Hackspace on the Aviva site here:  Hitchin Hackspace Community Workshop  We are very close to getting funding to enable our Hackspace, we just need a few more votes.

Pi wars 2018

29th Oct 2017

It’s Pi Wars time again! If you were following along the last few years, you’ll know that Hitchin Hackspace has previously entered the Raspberry Pi powered robot competition with some innovative and challenging robot designs, sometimes with great success, often with stress and very late nights towards the end. This time we’re doing things a little differently. On the one hand there’s the A team of Dave, Martin, Brian, Mike and Paul, taking the typical Hitchin approach; and on the other hand there’s, well, me. I’m being politely referred to as the rebel, or more frequently Team Defectorcon.
Why the split? I want to take a different approach; a somewhat risky strategy that’s unlikely to be competitive, and I knew the rest of the team would prefer something more along the lines of what they’ve done before.
So what’s the difference? Hitchin Hackspace typically goes for highly optimised designs, with attachments specifically tailored for each challenge, attached to a high performance drivetrain and using carefully selected sensors. I’m going to be using a robot kit as the basis of my entry, and my only sensor is going to be a camera. I’m hoping to use off the shelf items wherever possible even if they may be a little compromised. In addition to that, I’m going to attempt to enter *all* challenges autonomously, even those intended to be driven using remote control. By starting with a kit, I’m hoping that I can get a moving chassis very early on, so I can get maximum time to develop and test the software.

Progress has been very good so far. I decided to use Coretec Robotics’ ‘Tiny’ chassis very early on, as it’s a nice, popular platform that other Pi Wars competitors will recognise, it’s not that expensive and it’s not super fancy (important, as I want this to be a relatively budget build and show others what can be done with simple and cheap mechanics). I saw them at the competition last year and was impressed how well such a small chassis coped with the obstacle course. In fact, in many places it had an easier time than its larger competitors.

The Tiny kit comes with the basics of a moving chassis: wheels, motors, chassis (including camera mount) and motor controller. At a minimum, that leaves me to select the Raspberry Pi board I’m going to use, along with the battery, voltage regulator, camera and LEDs. All Pi Wars bots need extra LEDs :-). Taking those in order:

Pi board: This was an easy one. Since I want to use computer vision for the challenges, it’s inevitably going to be computationally intensive, so I wanted the fastest board I could get: the Pi 3 B.

Battery: As above, due to the processing power (and therefore power consumption) and learning from previous years, I knew I wanted the highest-capacity battery I could fit in. Lithium polymer batteries offer the most energy density and the best value at this scale, so that decided the battery technology. The stock Tiny kit motors are rated for about 6V, but I expected they’d cope with being slightly over-volted for a bit more speed, so I went with a 2-cell pack (7.4V nominal). HobbyKing do a vast array of LiPo batteries and I knew from previous projects that the quality is OK, so I used their battery chooser to find the highest-capacity pack that would fit within the kit chassis: 2.2Ah. That will do nicely : – )

 

Voltage regulator: Hobbyking also do a bunch of regulators (often called a BEC in radio control circles, a Battery Eliminator Circuit, i.e. it eliminates the separate pack that used to be used to power an RC receiver). I picked a 5A version so it was definitely adequate for the burst current a Pi may need.

Camera: I went for the Raspberry Pi branded camera as I knew it would *just work*. I’ve also bought a clip-on wide-angle lens to hopefully reduce the amount of turning needed to spot things. The quality of the wide-angle lens isn’t great though; I may have been better off getting a Sainsmart Pi camera, as it already has a built-in, well-matched wide-angle lens and is a bit cheaper.

LEDs: most Pi Wars bots have some flashy RGB LEDs on them for decoration. I also wanted LEDs but, after a suggestion from Rob Berwick, I’m going to try to use the lighting for something functional. One of the challenges with computer vision is variable lighting conditions, particularly shadows and colour changes. Sharp shadows can create edges and shapes that the detection algorithms can mistake for the actual target. By having very high-power white lights, I’m hoping I can balance out the sun, reducing the effect of shadows. Beating the sun requires *a lot* of light though, about 15 watts by my calculations (1000 lux over about 1m^2 requires 10-20 watts). Sparkfun do a pack of five 3-watt LEDs, so I’m going to set up an array of wide-beam headlights.
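The back-of-the-envelope sum behind that wattage estimate, with an assumed white-LED efficacy (real parts land roughly in the 50-100 lm/W range, which is where the 10-20 watt figure comes from):

```python
target_lux = 1000          # illuminance needed to compete with daylight shadows
area_m2 = 1.0              # area in front of the robot we want to light
efficacy_lm_per_w = 80     # assumed LED efficacy in lumens per watt

lumens_needed = target_lux * area_m2               # lux = lumens per square metre
watts_needed = lumens_needed / efficacy_lm_per_w
print(f"~{watts_needed:.0f} W of LED power")       # ~10-20 W across 50-100 lm/W
```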

 

So those components cover the basics. I also wanted a screen on the bot for debugging, so I went with a fancy touch screen from Adafruit, which also lets me drive menus. Unfortunately, after buying the screen, I realised it had created a bit of a challenge. The screen uses about 8 of the Pi’s GPIO pins, and many of them clash with the pins of the Explorer motor control board that comes with the Tiny kit. That controller also doesn’t work well when stacked under a screen, and it can only drive the motors at 5V (from the Pi), not battery voltage. I went looking for another motor controller but couldn’t find one that met all my needs, particularly the need to stack. The best I could find is a Picon Zero from 4tronix, but that doesn’t stack, so I needed to find a way to get the necessary control signals to it. Luckily the Adafruit screen includes a second set of pins on the underside, duplicating the full GPIO array, so I’m planning to use a ribbon cable to connect them to the motor controller. Getting this to fit turned out to be a much bigger headache than I’d expected.
With my component selection covered, I’m going to leave it there for this week, other than to say a CAD model of the current design is now on Grabcad here: https://grabcad.com/library/pi-radigm-v1-piwars-robot-1

I’ve done a wiring diagram with Fritzing:


And the build currently looks like this:

 

Next week I’m hoping to get the wiring finished and some basic software going so that it’s driving, and I’ll talk about how I’m planning on attacking each Pi Wars challenge.

Reminder: please vote for Hitchin Hackspace on the Aviva site here:  Hitchin Hackspace Community Workshop  We are very close to getting funding to enable our Hackspace, we just need a few more votes.

Thanks

Mark

*Thanks to Pimoroni and Coretec robotics for letting me share CAD of their parts.

Monthly recap and dates for your diary

2nd Apr 2017

The last couple of months have been very busy for the members of Hitchin Hackspace. Not only were members meeting twice weekly to finish our robot for Pi Wars, but we have also (after an arduous 2.5 years) been given access to the abandoned toilet block on Bancroft.


 

We’ve already spent many weekends there, tidying, repairing and planning what will be our very own bricks and mortar Hackspace and though we’re still a way off being ready to move in permanently, we’re eager to get there. These are exciting times.

On the 14th of March we celebrated our 5th Birthday. What started as three strangers meeting in The Vic for the first time has turned into a most excellent community with many members who love to make and have fun doing it as a group. We had a good turn out of members old and new, and there was even the traditional Hackspace birthday cheesecake.


The Pi Wars team have spent today at the Cambridge Computer Laboratory for the first day of competition: school teams. They’ve been having a lot of fun meeting the other teams and even had a VIP guest BigHak commander: Dr Lucy Rogers, a judge on BBC Robot Wars and the head judge for Pi Wars.

Our team will be competing tomorrow and we are all keen to find out how well they do.

 

Dates for your diary

Our weekly Build Nights for April:

  • 3rd
  • 10th
  • 17th
  • 24th

These events will be held in the pavilion on Ransom’s Rec between 7:30pm-10:00pm. Everyone is welcome to come along and see what our group get up to. You can bring a project along if you’d like, or just pop in for a chat. New people are always welcome and we enjoy seeing what projects they may bring.

Our social is on the 11th. This will be held, as always, in The Victoria at the end of Bancroft. This is an informal event and again, everybody is welcome. We start arriving from 7:30pm and it usually goes on until 11:00pm.

We hope you can make it to some of these events and if you know anyone that might be interested, bring them along. The more the merrier!

 

 

Piwars 3.0 Deadline Looms

28th Mar 2017

The final few days are here!

Obviously all competitors will have completely finished their robots and will have dialled in the settings to perfectly tune each challenge to run at peak performance… yeah, right.

Last year we ran so close to the deadline that we were not just tweaking challenge code on the competition day, we were completely writing some challenges from scratch. This year we made a conscious effort to start early so that wouldn’t be required. So far that hasn’t gone exactly to plan. We started early, for sure, but got so bogged down in the details and minutiae that little things like getting a robot driving early on (and thus allowing for driving practice) seemed to fall by the wayside.

In fairness, last year we went all out in the last 2 weeks. So much so that it caused a few “issues” with home lives, requiring the use of many, many ‘brownie points’. This year has been a definite improvement. So far we have managed to not need the last mad-dash effort, but it’s still going to be close.

We have made the stripboard version of the breadboard electronics. It’s pretty simple, as it is mostly just I2C pins with a small number of magic components that make our lovely laser sensors work beautifully. The soldered board is significantly smaller than the breadboard, which will make the bot a little neater.

We’ve been playing with LEDs. This was one simple test to make TITO2 “shine” above the rest. As it’s not part of the basic challenges, we’ve left it a little late. However, Brian has come up with some “bright” ideas… really sorry, couldn’t resist.

Over the last few days/hours we have got the line-following sensor working and mostly coded up. Big improvement from last year, as we wrote that challenge off near the end. It’s taken quite some research to get a sensor that can read the laminated sample provided. With some testing, playing, research and purchasing power, we now have a sensor board that, although it doesn’t work directly with the Raspberry Pi, does via an Arduino Nano. It’s a bit annoying, as if it wasn’t for this, the only real electronic device other than the ESCs would be the Pi Zero W. But it’s only on the sensor mount, and although the sensor was a bit of a pain to get information and code samples for from the manufacturer, I’m still classing it as a win. I should stress that although the sensor picks up the printed line on the vinyl print, the readings are much closer to white than the black electrical tape readings, so it may still prove less reliable in tougher testing.

Anyway, I hope you are all progressing well. There may be one or two more posts before the deadline, depending on time, energy and sleep requirements.

🙂