Piradigm – Approach to Pi Wars challenges 2018

12th Nov 2017

Quick progress update: chassis wired and driving in remote control mode!

 

Now that I’ve got the basic functionality working, I can start programming for the challenges. In principle I could compete in three of the challenges right now, as they’re intended to be entered under manual control. As I mentioned last time though, I’m hoping to attempt those autonomously too, as well as the mandatory autonomous challenges. Here’s how I’m intending to approach them all:
Somewhere Over the Rainbow (driving to coloured balls in sequence)
I’m hoping to use OpenCV’s contour detection for this challenge: spot shapes, filter them by size, shape and colour to identify the coloured balls, then use the size of the ball (and possibly the height of the arena) to judge the distance to it. I have a few ideas to make the image processing quicker and more reliable: mount the camera at the same height as the balls, and prioritise looking where the balls should be, based on their expected height and angle relative to the start position.
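As a very rough illustration of the sort of pipeline I have in mind (not final code; the HSV range is a placeholder that will need tuning under the arena lighting):

```python
import cv2
import numpy as np

def find_ball(frame_bgr, hsv_lo, hsv_hi, min_area=100):
    """Return (cx, cy, radius_px) of the largest blob in an HSV range, or None."""
    hsv = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2HSV)
    mask = cv2.inRange(hsv, hsv_lo, hsv_hi)
    # Remove speckle before looking for contours
    mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, np.ones((5, 5), np.uint8))
    # [-2] keeps this working across OpenCV 3.x and 4.x return styles
    contours = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                cv2.CHAIN_APPROX_SIMPLE)[-2]
    candidates = [c for c in contours if cv2.contourArea(c) > min_area]
    if not candidates:
        return None
    (cx, cy), radius = cv2.minEnclosingCircle(max(candidates, key=cv2.contourArea))
    return cx, cy, radius  # radius shrinks with distance, so it can gauge range

# e.g. a first guess at "red" (red wraps around hue 0, so it may need two ranges):
# ball = find_ball(frame, np.array([0, 120, 70]), np.array([10, 255, 255]))
```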

 

Minimal maze
To help with many of the challenges, I’m intending to use external markers (like QR codes) to track the robot’s position and orientation on the course. This year the minimal maze allows markers to be used, so I’m intending to put an ArUco marker on each corner, in the hope that a wide-angle lens will always be able to see at least one of them, giving me the robot’s position on the course at all times. I’ll preprogram waypoints for each corner of the track and use the markers to navigate to them in sequence.
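Marker detection itself should be straightforward, as OpenCV’s contrib build ships with an ArUco module. A minimal sketch of what I have in mind (the dictionary choice is a placeholder, not final):

```python
import cv2
import cv2.aruco as aruco

# A small dictionary means fewer bits per marker: faster, more robust detection
DICT = aruco.Dictionary_get(aruco.DICT_4X4_50)

def find_markers(frame_bgr):
    """Return {marker_id: corner_points} for every ArUco marker in view."""
    gray = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2GRAY)
    corners, ids, _rejected = aruco.detectMarkers(gray, DICT)
    if ids is None:
        return {}
    return {int(i): c for i, c in zip(ids.flatten(), corners)}
```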

Hitchin Hackspace built a copy of the minimal maze last year. It’s the same this year, but with different colours for each wall.

Straight line speed challenge
Like the maze, I’m intending to put a marker at the far end of the course and just drive towards it. Once the marker reaches a certain size in the image, I’ll know I’m at the end of the course and can stop. This is the first challenge I’m going to attempt to program. If I get all the others done and working reasonably reliably, I may come back and try to do it without a marker, just using the colour of the edges of the course as guidance.
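Since the marker’s apparent size grows as the robot approaches, the stop condition can be as simple as a pixel-width threshold. A rough sketch of the loop I have in mind, reusing the find_markers() helper from the maze sketch above; set_motors() and camera.frames() are hypothetical stand-ins for whatever motor and camera interfaces I end up with, and the threshold will need calibrating on the real course:

```python
import numpy as np

STOP_WIDTH_PX = 180   # apparent marker width near the end wall; needs calibrating

def set_motors(left, right):
    """Stub for the real motor driver (hypothetical interface)."""
    pass

def marker_width_px(corners):
    """Approximate on-screen width of a marker from its corner points."""
    pts = corners.reshape(-1, 2)
    return np.linalg.norm(pts[0] - pts[1])

def run_straight_line(camera, marker_id=0):
    for frame in camera.frames():           # hypothetical frame generator
        markers = find_markers(frame)       # helper from the maze sketch
        if marker_id not in markers:
            set_motors(0, 0)                # lost the marker: stop, don't guess
            continue
        corners = markers[marker_id]
        if marker_width_px(corners) >= STOP_WIDTH_PX:
            set_motors(0, 0)                # close enough: end of course
            break
        # Simple proportional steering to keep the marker centred
        err = corners.reshape(-1, 2)[:, 0].mean() - frame.shape[1] / 2
        set_motors(1.0 + 0.002 * err, 1.0 - 0.002 * err)
```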

 

Duck shoot (target shooting)
This challenge is intended to be shooting targets while manually steering the robot. I’m hoping to do autonomous aiming through object detection. I’ve picked a gun (blog post coming up on that) and got it electronically actuated, so I “just” need to find the targets and aim. I’m hoping the targets will be rectangles of a consistent colour, or at least something easily identifiable using OpenCV, but that’s up to the organisers and course builders to let me know. I know roughly the size, position and distance of the targets, so I may be able to use that to narrow down which detected shapes are targets.
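If the targets do turn out to be plain coloured rectangles, a polygon approximation on the contours should pick them out of a colour mask. A sketch (the area bounds are guesses until the real course dimensions are known):

```python
import cv2

def find_rect_targets(mask, min_area=200, max_area=5000):
    """Return bounding boxes of roughly rectangular blobs in a binary mask
    (e.g. the output of cv2.inRange on the target colour)."""
    # [-2] keeps this working across OpenCV 3.x and 4.x return styles
    contours = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                cv2.CHAIN_APPROX_SIMPLE)[-2]
    targets = []
    for c in contours:
        area = cv2.contourArea(c)
        if not (min_area < area < max_area):
            continue  # wrong size for a target at the expected distance
        approx = cv2.approxPolyDP(c, 0.04 * cv2.arcLength(c, True), True)
        if len(approx) == 4:  # four corners: plausibly a rectangle
            targets.append(cv2.boundingRect(approx))
    return targets
```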

 

Pi noon (1 v 1 balloon popping)
This is going to be tricky! I’m again intending to put markers at the corners of the arena so I can judge position. After that, I can either drive around randomly or look for coloured round things (balloons) and drive towards them. Hopefully, once I’ve got the code for Over the Rainbow and the Minimal Maze working, this one should be more than halfway there. I think spotting balloons may be tricky though.
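For the balloon itself, one option I may try is Hough circle detection on a blurred greyscale image, combined with the colour masking above; all the parameters here are placeholders that would need tuning:

```python
import cv2

def find_balloons(frame_bgr):
    """Return (x, y, r) circles that might be balloons. Parameters need tuning."""
    gray = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2GRAY)
    gray = cv2.medianBlur(gray, 5)  # smooth texture so only large shapes remain
    circles = cv2.HoughCircles(gray, cv2.HOUGH_GRADIENT, dp=1.5, minDist=50,
                               param1=100, param2=40, minRadius=20, maxRadius=150)
    return [] if circles is None else circles[0]
```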
Slightly deranged golf
The golf challenge is a little like Pi Noon, in that there’s a round object on a course that’s going to be difficult to catch. I’m going to attempt it the same way: programming a waypoint for the hole and looking for round white objects that might be the ball. Very tricky.

The golf course from last year; note the steep hill at the start that caused the ball to roll into a hazard.

Obstacle course
Again, by having markers on the corners of the course and programming waypoints, like the minimal maze, I’m hoping to navigate the course autonomously. The extra 3D-ness of the obstacle course will make this more difficult, as the markers may not always be visible. The main difficulty will be the turntable though. I may need to put a marker on the turntable, or find some other trick. I’m leaving this challenge until last, as it’s so difficult.

 

Obviously it’s still early days, so these plans may change as I get into them and find image processing too challenging for some reason, but hopefully I can complete at least some of the challenges using the camera.

Next week: design and build notes.

 

Reminder: please vote for Hitchin Hackspace on the Aviva site here: Hitchin Hackspace Community Workshop. We are very close to getting funding to enable our Hackspace; we just need a few more votes.

Pi Wars 2018

29th Oct 2017

It’s Pi Wars time again! If you were following along the last few years, you’ll know that Hitchin Hackspace has previously entered the Raspberry Pi powered robot competition with some innovative and challenging robot designs, sometimes with great success, often with stress and very late nights towards the end. This time we’re doing things a little differently. On the one hand there’s the A team of Dave, Martin, Brian, Mike and Paul, taking the typical Hitchin approach; and on the other hand there’s, well, me. I’m being politely referred to as the rebel, or more frequently Team Defectorcon.
Why the split? I want to take a different approach: a somewhat risky strategy that’s unlikely to be competitive, and I knew the rest of the team would prefer something more along the lines of what they’ve done before.
So what’s the difference? Hitchin Hackspace typically goes for highly optimised designs, with attachments specifically tailored for each challenge, attached to a high-performance drivetrain and using carefully selected sensors. I’m going to be using a robot kit as the basis of my entry, and my only sensor is going to be a camera. I’m hoping to use off-the-shelf items wherever possible, even if they’re a little compromised. In addition to that, I’m going to attempt to enter *all* the challenges autonomously, even those intended to be driven by remote control. By starting with a kit, I’m hoping to get a moving chassis very early on, giving me maximum time to develop and test the software.

Progress has been very good so far. I decided to use Coretec Robotics’ ‘Tiny’ chassis very early on, as it’s a nice, popular platform that other Pi Wars competitors will recognise, it’s not that expensive, and it’s not super fancy (important, as I want this to be a relatively budget build and show others what can be done with simple, cheap mechanics). I saw them at the competition last year and was impressed by how well such a small chassis coped with the obstacle course. In fact, in many places it had an easier time than its larger competitors.

The Tiny kit comes with the basics of a moving chassis: wheels, motors, chassis (including camera mount) and motor controller. At a minimum, that leaves me to select the Raspberry Pi board I’m going to use, along with the battery, voltage regulator, camera and LEDs. All Pi Wars bots need extra LEDs :-). Taking those in order:

Pi board: This was an easy one. Since I want to use computer vision for the challenges, the work is inevitably going to be computationally intensive, so I wanted the fastest board I could get: the Pi 3 Model B.

Battery: As above, given the processing power (and therefore power consumption) and learning from previous years, I knew I wanted the highest-capacity battery I could fit. Lithium polymer batteries offer the highest energy density and the best value at this scale, so that decided the battery technology. The stock Tiny kit motors are rated for about 6V, but I expected they’d cope with being slightly over-volted for a bit more speed, so I went with a 2-cell pack (7.4V nominal). HobbyKing do a vast array of LiPo batteries and I knew from previous projects that the quality is OK, so I used their battery chooser to find the highest-capacity pack that would fit within the kit chassis: 2.2Ah. That will do nicely :-)
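As a rough sanity check on that capacity (my own guessed figures, not measurements): if the Pi 3 plus camera average somewhere around 1A and the two small motors another 1A between them, the total draw is about 2A, so 2.2Ah ÷ 2A ≈ just over an hour of driving per charge. That should comfortably cover a testing session between charges.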

 

Voltage regulator: HobbyKing also do a bunch of regulators (often called a BEC in radio control circles: a Battery Eliminator Circuit, i.e. it eliminates the separate pack that used to be used to power an RC receiver). I picked a 5A version so it would definitely be adequate for the burst current a Pi may need.

Camera: I went for the Raspberry Pi branded camera as I knew it would *just work*. I’ve also bought a clip-on wide-angle lens, hopefully reducing the amount of turning needed to spot things. The quality of the wide-angle lens isn’t great though; I may have been better off getting a SainSmart Pi camera, as it already has a built-in, well-matched wide-angle lens and is a bit cheaper.

LEDs: Most Pi Wars bots have some flashy RGB LEDs on them for decoration. I also wanted LEDs but, after a suggestion from Rob Berwick, I’m going to try to use the lighting for something functional. One of the challenges with computer vision is variable lighting conditions, particularly shadows and colour changes. Sharp shadows can create edges and shapes that the detection algorithms can mistake for the actual target. By having very high-power white lights, I’m hoping I can balance out the sun, reducing the effect of shadows. Beating the sun requires *a lot* of light though: about 15 watts by my calculations (1,000 lux over about 1 m² requires 10–20 W). SparkFun do a pack of five 3W LEDs, so I’m going to set up an array of wide-beam headlights.
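To show where that estimate comes from (my own rough figures): 1,000 lux falling on 1 m² is 1,000 lumens, and high-power white LEDs manage very roughly 60–100 lumens per watt, so 1,000 lm ÷ (60–100 lm/W) ≈ 10–17 W of LED power. Five 3W LEDs gives 15W, which sits nicely in that range.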

 

So those components cover the basics. I also wanted a screen on the bot for debugging, so I went with a fancy touch screen from Adafruit, which also lets me drive menus. Unfortunately, after buying the screen, I realised it had created a bit of a challenge. The screen uses about eight of the Pi’s GPIO pins, and many of them clash with the pins used by the Explorer motor control board that comes with the Tiny kit. That controller also doesn’t work well when stacked under a screen, and it can only drive the motors at 5V (from the Pi), not battery voltage. I went looking for another motor controller but couldn’t find one that met all my needs, particularly the need to stack. The best I could find was a Picon Zero from 4tronix, but that doesn’t stack either, so I needed to find a way to get the necessary control signals to it. Luckily the Adafruit screen includes a second set of pins on the underside, duplicating the full GPIO array, so I’m planning to use a ribbon cable to connect them to the motor controller. Getting this to fit turned out to be a much bigger headache than I’d expected.
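For reference, driving the motors through the Picon Zero should then look something like the sketch below. This assumes 4tronix’s piconzero Python library; the function names are from memory, so check their documentation for the exact calls and speed ranges:

```python
# Motor control sketch, assuming 4tronix's piconzero library.
# Speed range is assumed to be roughly -128..127; check the docs.
import piconzero as pz

pz.init()
try:
    pz.setMotor(0, 80)   # left motor forward at partial speed
    pz.setMotor(1, 80)   # right motor forward at partial speed
    # ... drive until the vision code decides to stop ...
    pz.stop()            # both motors off
finally:
    pz.cleanup()
```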
With my component selection covered, I’m going to leave it there for this week, other than to say that a CAD model of the current design is now on GrabCAD here: https://grabcad.com/library/pi-radigm-v1-piwars-robot-1

I’ve done a wiring diagram with Fritzing:

 

And the build currently looks like this:

 

Next week I’m hoping to get the wiring finished and some basic software going so that it’s driving, and I’ll talk about how I’m planning to attack each Pi Wars challenge.

Reminder: please vote for Hitchin Hackspace on the Aviva site here: Hitchin Hackspace Community Workshop. We are very close to getting funding to enable our Hackspace; we just need a few more votes.

Thanks

Mark

*Thanks to Pimoroni and Coretec Robotics for letting me share CAD models of their parts.