The Jetson Nano Developer Kit B01 is a small computer built around an NVIDIA Maxwell GPU, a quad-core ARM Cortex-A57 processor and 4GB of memory, along with four USB 3 ports, Gigabit Ethernet, HDMI and DisplayPort output. Main storage is on a MicroSD card, and a variety of expansion is available via GPIO, I2C and UART. On the software side NVIDIA provide their JetPack SDK, a customised version of Ubuntu. The development kit is intended as an entry point into machine learning, for which I will be using the Python programming language. I got my board from Pimoroni.
These notes cover my process of setting one up, with links to the documentation. They are not intended to repeat the install guides, but to provide an install sequence and additional commentary where needed. I’m going to assume you have a little experience of using the terminal and are familiar with the bash command line – I’ve no idea how this would be done through the GUI.
I followed the instructions for downloading and installing JetPack 4.4 at https://nvidia.com/jetsonnano-start. I used a 64GB Class 10, UHS-I, U3, V30 SanDisk card. I formatted the card in a camera before using balenaEtcher to write the JetPack SDK image; this creates a partition of about 16GB on the card formatted to ext4, and during installation the volume is resized to fill the card.
Despite using a good quality USB power supply with an output of 3 Amps at 5 Volts into the Micro USB port, the computer would only boot long enough for the NVIDIA logo to appear on screen; after a few seconds the green power LED would go out and it would be off. The same happened with a variety of USB power supplies. I got round the problem by using a 5 Amp power supply connected to the barrel jack J25 on the left (centre pin positive) and connecting the jumper J48 located just behind this connector.
Customising the Setup
There are a couple of things to do: get a network volume mounted and set the default version of Python.
For the network share install samba, some network utilities and the nano text editor:
$ sudo apt install samba cifs-utils nano
Create a text file, sudo nano /etc/samba/videoserver, with the following:
username=<your network username>
password=<your network password>
And set the permissions: sudo chmod 600 /etc/samba/videoserver. In this example I have a network share called video on my server at 192.168.1.30. Create a mount point for the share with sudo mkdir /mnt/video, then edit fstab with sudo nano /etc/fstab and add your network connection to the end:
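The exact line depends on your server, but with the example share above it would look something like this (a cifs mount using the credentials file, with x-systemd.automount providing the automount behaviour):

```
//192.168.1.30/video /mnt/video cifs credentials=/etc/samba/videoserver,noauto,x-systemd.automount 0 0
```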
Reload fstab with sudo mount -a and check for any errors. Because of the way that JetPack boots it does not appear to wait for the network, so the share needs to be set to automount; this causes it to only appear in drive listings when accessed. Further reading can be found in this excellent guide to fstab: https://wiki.archlinux.org/index.php/fstab.
JetPack 4.4 comes with two versions of Python, 2.7 and 3.6. I want it to default to 3.6, and while this is rather out of date I don’t want to go down the hole of upgrading just yet. You will also need to install pip and set pip3 as the default too.
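One way to do this is with update-alternatives; a sketch of the commands, where the exact binary paths are assumptions – check yours with which python3:

```
$ sudo apt install python3-pip
$ sudo update-alternatives --install /usr/bin/python python /usr/bin/python2.7 1
$ sudo update-alternatives --install /usr/bin/python python /usr/bin/python3.6 2
$ python --version
```

The higher priority (2) makes Python 3.6 the default; update-alternatives --config python switches between them later.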
I did get an error later on: a crash was reported on the desktop when an occasional Python 2 script ran. I fixed the error in /usr/sbin/l4t_payload_updater_t210 by changing the first line of the file from #!/usr/bin/python to #!/usr/bin/python2.
Post Install Problems
A recent update occurred, so I did the usual sudo apt-get update && sudo apt-get upgrade, but one of the packages gave a script error; this turned out to be with nvidia-l4t-bootloader, like so:
The MicroSD card I use in a Raspberry Pi ran out of space, so here is how I copied the contents of the 15GB drive to a new 64GB card and resized the partition. I used a separate computer running Debian, and as the machine does not have a monitor or keyboard attached, everything was completed from the bash command line over SSH.
1. Making a copy of the SD Card.
Insert the old card into your computer; if the computer attempts to mount the drive then unmount it. We need to find which device name has been assigned, using the lsblk command:
NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
The device sde matches our SD card, so we will use that. The dd command is used to create the ISO image, which I am creating in my home directory:
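The command was along these lines – sde is the device found with lsblk above, and the image file name is just an example:

```
$ sudo dd if=/dev/sde of=~/sdcard.iso bs=4M status=progress
$ sync
```

Writing the image to the new card is the same command with if and of swapped.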
The copied partition is now the same size as the original. If you have space remaining, the new card can be put back in the Pi and expanded with the raspi-config utility, using the Expand Filesystem option in the Advanced Settings section. However, if the drive is completely full you won’t be able to log in, as there won’t be enough space for the temporary files created at login. To get round this you can use parted to resize; start with:
if you get the following message:
Warning: Unable to open /dev/sde read-write (Read-only file system). /dev/sde has been opened read-only.
then quit from parted. If you are using a full size SD card, check the Write Protect tab on the side of the card and try again, otherwise try:
setting readonly to 0 (off)
If problem persists try formatting the new card in a camera, as these have a simple file system, and write the ISO image again.
We need to resize the larger partition, /dev/sde2, with the ext4 file system; the smaller partition is used to boot the Pi and can be ignored. In parted, list the partitions with the print command:
Number  Start   End     Size    Type     File system  Flags
 1      4194kB  50.1MB  45.9MB  primary  fat32        lba
Using resizepart, set the new size. I set this to somewhere between the existing 15GB and the full size of the card, to save having to work out the exact remaining space manually:
Now update the boundaries to grow and resize the partition into the freshly allocated space:
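Assuming the same device names as above, this looks something like the following sketch – resizepart runs inside parted, the ext4 resize from the shell afterwards:

```
(parted) resizepart 2 60GB
(parted) quit
$ sudo e2fsck -f /dev/sde2
$ sudo resize2fs /dev/sde2
```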
Now boot the Pi with the new card, login and use the raspi-config utility then in Advanced Options choose Expand Filesystem and follow the onscreen instructions. Once rebooted you should now be set to fill up your new card.
While extracting the telemetry data from a GoPro is reasonably well documented, I have found some gaps around getting the extraction utilities installed and around extracting and combining data from multiple files. These notes are for a Debian/Ubuntu installation in a bash shell.
Installing the gopro-utils
As I couldn’t find any straightforward instructions for installation, I’ll be going through everything I needed to do to get it working; you may have some of these packages installed already.
sudo apt update
sudo apt upgrade
sudo apt install ffmpeg golang gpsbabel git
Now to get the gopro-utils and install them. I’m placing the source files into my Downloads directory, and as well as the GPS data extractor we’ll be adding the other telemetry tools too – this is all a bit long winded.
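Based on the stilldavid/gopro-utils repository layout, the sequence is something like this – the repository paths and output locations here are assumptions, so adjust to match the project README:

```
$ cd ~/Downloads
$ git clone https://github.com/stilldavid/gopro-utils.git
$ go get github.com/stilldavid/gopro-utils/telemetry
$ go build -o ~/bin/gopro2gpx github.com/stilldavid/gopro-utils/bin/gopro2gpx
$ go build -o ~/bin/gopro2json github.com/stilldavid/gopro-utils/bin/gopro2json
```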
You can see that what we want is on stream 3. As far as I can tell this stays the same every time, though I don’t know if it is different for other GoPro models.
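To check which stream holds the telemetry, probe the recording and look for the data stream tagged gpmd, then copy that stream out to a raw file – GOPR0001.MP4 here is a stand-in for your own recording:

```
$ ffprobe GOPR0001.MP4
$ ffmpeg -y -i GOPR0001.MP4 -codec copy -map 0:3 -f rawvideo GOPR0001.bin
```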
This bash script extracts the GPS data in GPX format from all the GoPro recordings in the directory; other options have been commented out, and if you are using Garmin VIRB Edit there is also an option for that. The script creates two files: one containing the raw telemetry data and another with the extracted GPS data. The GPS output file has the same name as the recording, but in lowercase with a .gpx extension.
#gopro2gpx -i "$BINFILE" -a 200 -f 3 -o "$OUTFILE-virb.gpx"
#gopro2json -i "$BINFILE" -o "$OUTFILE.json"
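Pieced together around the two commented-out options above, the loop might look like this sketch – stream 3 and the tool names are as discussed, but treat the details as assumptions rather than the original script:

```shell
#!/bin/bash
# extract a GPX file from every GoPro recording in the current directory
shopt -s nullglob
for MP4FILE in *.MP4; do
    # lowercase output name without the extension
    OUTFILE=$(echo "${MP4FILE%.*}" | tr '[:upper:]' '[:lower:]')
    BINFILE="$OUTFILE.bin"
    # copy the raw telemetry stream (stream 3) out of the recording
    ffmpeg -y -i "$MP4FILE" -codec copy -map 0:3 -f rawvideo "$BINFILE"
    # convert the raw telemetry to GPX
    gopro2gpx -i "$BINFILE" -o "$OUTFILE.gpx"
    #gopro2gpx -i "$BINFILE" -a 200 -f 3 -o "$OUTFILE-virb.gpx"
    #gopro2json -i "$BINFILE" -o "$OUTFILE.json"
done
```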
Merging GPX files
As the GoPro splits recordings into 4GB blocks, when extracting you will get a single GPX file for each recording. Many pages found by Google say that to create a single track you just need to append the files into one big file. This is wrong: what you end up with is a single file containing many short tracks, when what you are after is one long track covering the entire journey. This bash script uses gpsbabel to create a single merged file from the extracted GPX data; it creates a file called “gpsoutput.gpx”.
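The merge itself is a single gpsbabel invocation; the track,merge filter combines the input tracks into one, ordered by timestamp. File names here are examples:

```
$ gpsbabel -i gpx -f gopr0001.gpx -f gopr0002.gpx -x track,merge -o gpx -F gpsoutput.gpx
```

Add a -f option for each extracted GPX file, in any order – the filter sorts the points by time.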
Inkscape is a free vector graphics editor for all major platforms. It is generally aimed at art and design users, but it does have an option for generating G-code for use in your favourite CNC software. While Inkscape doesn’t have many of the functions of proper CAD/CAM software, it is a relatively easy place to start for creating basic designs; I have been using it to make boxes out of 3.5mm plywood.
These notes are based around my cheap CNC machine, sold as a CNC3018 by a variety of Chinese manufacturers on Amazon and eBay. The included controller is a Woodpecker CNC board (Arduino clone) which I have upgraded to GRBL v1.1, and I am using version 0.92.4 (April 2019) of Inkscape with the included Gcodetools.
This post focuses on setting up Inkscape for the CNC machine and producing the G-code from your drawing; it is not intended to be an Inkscape tutorial.
With a new drawing, set your document size; this should be the same as your CNC bed, in my case 300 x 180mm. From the Inkscape menu go to File > Document Properties and in the Page tab set the Display Units (millimeters in my case), the Orientation to Landscape, and the Page Size to width 300 and height 180. In the Grids tab set the Grid Units to mm and both Spacing X and Spacing Y to 1.0. Back on your main page, turn the page grid on with View > Page Grid.
By default Inkscape scales the stroke/line width when you resize a shape. To prevent this, click the fourth box from the right in the top icon bar: “when scaling objects, scale the stroke width by the same proportion”.
You can save this as a template, such as: CNC3018.svg or as the document default with: default.svg by saving the file to your templates directory:
On Linux and OS X: ~/.config/inkscape/templates/
On Windows: C:\Users\<username>\AppData\Roaming\inkscape\templates
The lines you draw will need to be the same width as the bit you are using in the CNC machine. Draw a rectangle, right click on it and select Fill and Stroke…. In the Fill tab click the X – no paint box, and on the Stroke Style tab set the width to that of the bit you are using – 1.5mm in my case. Subsequent rectangles will be in the same style; other shapes will need to be set up this way too. The colour of your lines should be black; there is some functionality for different colours to represent different depths, but I have not yet worked out how to do this.
Layout Tips for G-Code Routing
Remember to check the dimensions of the cuts: with an outside cut, such as the width and height of a box side, you need to measure to the inside of your rectangle; for holes in your box, measure to the outside edge. Inkscape sets distances to the outside edge.
For positioning holes for switches and the like, I add thin lines 0.1mm thick as guides and make use of the width/height settings as well as the Object > Align and Distribute options. A pair of digital vernier calipers is a great aid to discovering the required sizes. Remember to delete these guides before generating the G-code.
When generating the G-code each shape is treated as an individual object. So, let’s say you want two sides of your box cut from a single sheet of plywood: this would be two rectangles abutting each other, with the shared side to be cut overlapping. As it takes four passes to cut each shape 1mm at a time, the shared line down the centre would get extra, unnecessary passes. To fix this, select both rectangles and then Path > Combine followed by Path > Difference to make a single object.
Outputting to G-Code
Now that you have completed your drawing, save your work, then convert your objects to paths by selecting all objects and choosing Path > Object to Path. You may also want to place your drawing near the bottom left of the document, as this is where the CNC router starts. Using Gcodetools, there are three things you need to do to produce the G-code file. None of the Gcodetools windows close automatically when Apply is clicked; you will need to do that yourself. From the Inkscape menu:
1. Extensions > Gcodetools > Tools Library…
Select Tools Type: cylinder and click Apply. In the overlarge green box that appears you will need to set the tool diameter and feed speed.
This can be a bit fiddly, as the text can become detached from the box and the settings lost. What seems to work most reliably for me is to change to the Text tool (F8), click on the numbers you want to change, and once done go back to Select and Transform (F1). Resize the box afterwards to check that it is still working – if the green box moves but the text does not, then Ctrl-Z a few times and try again.
– diameter: tool bit diameter in mm
– feed: speed while cutting through the material in mm/second
– penetration feed: plunge speed into the material in mm/second
– depth step: depth of cut on each pass in mm
2. Extensions > Gcodetools > Orientation Points
This tells the G-code where to start, normally bottom-left on the CNC. Set the following:
– Orientation type: 2-points mode
– Z Surface: 0mm – this is the top of your surface
– Z Depth: -3.4mm – this is the thickness of material to cut, a negative number
3. Extensions > Gcodetools > Path to Gcode
This creates the G-code file, in the Preferences Tab set the following:
– File: output filename
– Directory: output directory
– Z safe height: 5mm – height above the work surface when moving between cuts
Once set, the filename doesn’t change; an incremental number is appended to the output filename instead. Click the Path to Gcode tab before clicking Apply (this appears to be a bug).
Your image will be updated to show the G-code routing. Give this a visual check to ensure that all objects have been coded and that it looks right; the path to be taken should be in colour and contain arrows showing the direction of the router.
If there are too many arrows, or if a line has arrows pointing in different directions, then there may be an object underneath – check your original artwork. In the image with the three circles below, A has not been converted to a path with Path > Object to Path, B has a duplicate object underneath, and C is correct.
The generated G-code does not appear to include the spindle motor start command, so remember to start the spindle manually in your CNC software before running the G-code – it’s interesting how easily these bits break with a sideways load. If you are cutting through rather than engraving, don’t forget to put a sacrificial layer between whatever you are making and the CNC’s bed; I use 5mm MDF/fibreboard.
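Alternatively you can add the spindle start to the top of the generated file yourself; in GRBL flavoured G-code that is the M3 word, something like this (the spindle speed is an example – use whatever suits your machine):

```
M3 S1000 ; start spindle clockwise at 1000 rpm
```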
FFmpeg is a command line program to manipulate, convert, record and stream video and audio; it is available for Mac, Linux and Windows. Here is a handy list of commands for reference, tested with version 3.1.12 in a Debian Linux environment. I expect this list to grow over time as needs arise.
Using this codec reduces the time it takes for the video to be available after upload. However, YouTube converts the file again to the VP9 codec, and unless you have a popular channel (100 subscribers or more) this can take days or even weeks; in the meantime your video can appear quite poor and blocky, even when watching at 1080p, especially when there is a lot of movement, as in a car dash-cam video. You can use FFmpeg to encode to the VP9 webm format yourself with this bash script:
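The script boils down to the two-pass constant quality encode from the WebM Wiki, roughly as follows – the CRF value and file names are examples:

```
$ ffmpeg -i input.mp4 -c:v libvpx-vp9 -b:v 0 -crf 31 -pass 1 -an -f webm /dev/null
$ ffmpeg -i input.mp4 -c:v libvpx-vp9 -b:v 0 -crf 31 -pass 2 -c:a libopus output.webm
```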
This script is based on the encoding method shown in the WebM Wiki. On my computer it is very slow, taking quite a few hours to encode just nine minutes of video, and the eventual results are so poor you’ll be wondering why you bothered.
• Convert to MP4 for use in Vegas Studio:
ffmpeg -i inputFile.mkv -codec copy outputFile.mp4
If you have a particularly old or odd video and get lots of “pts has no value” errors, then try this:
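This is the same remux as above with the genpts flag added in front of the input:

```
$ ffmpeg -fflags +genpts -i inputFile.mkv -codec copy outputFile.mp4
```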
The -fflags +genpts option adds a Presentation Timestamp (PTS) to the frames; it must come before the -i, as shown, to work.
• Set the video playback speed. This method adjusts the Presentation Timestamp (PTS) on each frame, which may not work with older software. To slow video down, multiply the PTS by the required factor; for example, setpts=2.0*PTS slows the action to half speed (dividing instead speeds it up). You can also reduce the number of dropped frames by increasing the frame rate with -r 50 – in this case I went from 25fps to 50fps – but depending on the chosen speed, frames may still be dropped.
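Put together, a conversion that halves the playback speed (setpts multiplies the timestamps, so larger factors mean slower playback) might look like:

```
$ ffmpeg -i input.mp4 -r 50 -filter:v "setpts=2.0*PTS" output.mp4
```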
Here is a small bash script that takes any video format supported by FFmpeg, such as .MKV, .MP4 or .MOV, and extracts the audio to an .MP3 file. It will also split that MP3 file into chunks and put them in a convenient directory. You will need to install ffmpeg and mp3splt for your particular platform.
./mkv2mp3 "big fat file.mkv"
This uses ffmpeg to convert “big fat file.mkv” to “big fat file.mp3” and then uses mp3splt to create a directory “big fat file” containing the files 01 – big fat file.mp3, 02 – big fat file.mp3, etc. The MP3 files will be encoded at 128k constant bit rate and each will be around 50 minutes in length. To install the tools on Debian/Ubuntu use: sudo apt-get install ffmpeg mp3splt
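A minimal sketch of such a script, assuming the behaviour described above (the mp3splt options are my reading of its manual – check them against your version):

```shell
#!/bin/bash
# mkv2mp3 - extract the audio as 128k CBR MP3 and split into ~50 minute chunks
# usage: ./mkv2mp3 "big fat file.mkv"
[ -z "$1" ] && exit 0             # no input file, nothing to do
NAME="${1%.*}"                    # "big fat file.mkv" -> "big fat file"
# extract the audio track as constant bit rate MP3
ffmpeg -i "$1" -vn -codec:a libmp3lame -b:a 128k "$NAME.mp3"
# split into 50 minute chunks named "01 - big fat file.mp3", "02 - ..." etc.
mkdir -p "$NAME"
mp3splt -t 50.00 -d "$NAME" -o "@n - @f" "$NAME.mp3"
```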
Taking this further, I thought it would be nice to have these converted into the M4B audiobook format for use on my elderly iPod. The script below assumes that you have processed the files as above and have added metadata tags using a tool like mp3tag (yes, I know this is for Windows).
To complete this we need to: combine the multiple MP3 files into one big file, or read the original big file; then convert that to M4B format at a 96k bit rate and add chapter marks every ten minutes. For this I have used ffmpeg v3.2.12 and libmp4v2 (for the mp4chaps utility). To install on Debian/Ubuntu use: sudo apt-get install libmp4v2-dev mp4v2-utils ffmpeg
This script works best from a single MP3 file rather than from files that have been re-combined into one; recombining the files caused ffmpeg to exclaim “invalid packet size” and “invalid data” errors. The script can tell the difference between a directory and a single MP3 and processes the input accordingly. Don’t forget to add metadata tags and cover art before you run it.
## get the tag text from the metadata
## if the line contains an equals (=) then split by the first equals.
## check if input is a directory
# get the name of the first MP3 file in the directory
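The equals-splitting mentioned in the comments above can be done with bash parameter expansion; a standalone illustration of the idea:

```shell
line="title=Album=Name"
# split by the FIRST equals only
key="${line%%=*}"     # everything before the first '='
value="${line#*=}"    # everything after the first '='
echo "$key"           # title
echo "$value"         # Album=Name
```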
Bluetooth Low Energy (BLE), introduced in Bluetooth 4.0, is an industry-standard wireless protocol built for the Internet of Things (IoT); it is designed to provide connectivity for devices operating from low capacity power sources such as coin cell batteries.
In this introduction to BLE I’ll be configuring a Raspberry Pi 2 to talk to a smart watch. We will be installing the latest version of BlueZ from source, enabling BLE support. This is not a tutorial on decoding the data from the watch – I am just using it as an example – although I may write about decoding it in a future posting.
I am using an ASUS USB-BT400 Bluetooth 4.0 dongle on a Raspberry Pi 2, but this will work on any computer with a Debian based distribution. Your dongle must be BLE/Bluetooth 4.0 capable, otherwise this won’t work. I am using an ID107HR activity tracker with pedometer and heart rate monitor, randomly chosen from the list of cheap ones available on Amazon. While using the Pi to talk to the watch, make sure Bluetooth on your phone is off, as the watch can only connect to one device at a time.
The current distribution of Raspbian (jessie) on the Raspberry Pi comes with version 5.23 of the BlueZ Bluetooth stack. That’s rather old, dating from September 2014, and it lacks many of the features we will be needing. Version 5.44 of BlueZ brings many changes, with familiar components such as hcitool and gatttool being deprecated, so I will be ignoring those and using the available command, bluetoothctl, in the terminal.
With Raspbian jessie installed we will need to update the Pi, make sure some packages are installed, and then install the latest version of BlueZ. But first, remove the installed version 5.23 of BlueZ:
removing the installed bluez
$ sudo apt-get --purge remove bluez
$ sudo apt-get autoremove
Next, perform the traditional housekeeping updates then install the build tools and USB libraries. Those parts that are installed already will be automatically skipped.
Inside the BlueZ directory, configure, make (this takes a while), and install. The experimental option adds BLE support and enabling the library allows for python use later on:
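The configure and make steps would look something like this, with the two options just described (run from inside the BlueZ source directory):

```
$ ./configure --enable-experimental --enable-library
$ make
```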
$ sudo make install
Configuring and Starting BlueZ
At this stage we need to check that the installation worked and that we can see the bluetooth dongle. With the dongle in a USB port it should appear on your list of USB devices; here you can see mine as device ID 0b05:17cb ASUSTek Computer, Inc.:
Bus 001 Device 003: ID 0424:ec00 Standard Microsystems Corp. SMSC9512/9514 Fast Ethernet Adapter
bluetoothctl remembers your devices, so when you next use the program the watch will appear on the list at the start. The controller has a number of options; these can be seen with the help command. You can use show to view the status of your dongle:
UUID: A/V Remote Control Target (0000110c-0000-1000-8000-00805f9b34fb)
The list of UUIDs shows the services supported by the dongle. Now we can power the dongle on, set the agent – this manages the connection – and then connect to the watch, on which the bluetooth symbol will appear. Once connected there will be a pause, then you will see a list of attributes supported by the watch as it advertises the services available:
These UUIDs are used to describe the services available on the device. Some are pre-defined and can be found in the GATT schema (https://www.bluetooth.com/specifications/gatt/characteristics), others are vendor specific, and unless the vendor publicly releases them, decoding can become rather difficult. There are four types of attribute:
Services – collections of characteristics and relationships to other services that encapsulate the behavior of part of a device
Characteristics – attribute types that contain a single logical value
Descriptors – defined attributes that describe a characteristic value
Declarations – defined GATT profile attribute types
Each attribute is identified by a 128 bit UUID. For example, take one of the characteristics from the list above: 00002902-0000-1000-8000-00805f9b34fb. The first eight hex digits hold the unique identifier, 00002902, usually written as the 16 bit UUID 0x2902. Data is contained in services; each service has a number of characteristics, which may contain further descriptors depending on the requirements of the characteristic. You can see how the data is mapped out in this chart:
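The short and long forms are related by the Bluetooth base UUID, 0000xxxx-0000-1000-8000-00805f9b34fb, with the 16 bit UUID filling in the xxxx. A quick illustration in the shell:

```shell
short=2902
# pad the 16 bit UUID into the 128 bit Bluetooth base UUID
full=$(printf '0000%s-0000-1000-8000-00805f9b34fb' "$short")
echo "$full"   # 00002902-0000-1000-8000-00805f9b34fb
```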
Here is a spreadsheet with the watch data reformatted and tastefully coloured to illustrate this. Observe the Service URL column – it looks a lot like a directory structure:
Here we see two services, /service0008 and /service000c. Looking further into the second service, /service000c, we see that it has four characteristics, and two of those have descriptors. We can interrogate the characteristics and descriptors to glean further information by selecting the attribute and reading, like so:
Which is all very nice, but not particularly helpful, as the manufacturer has chosen to use custom, proprietary UUIDs for the watch. We don’t know the instructions to send to have the watch release its data.
Those Scripting BlueZ
Inevitably, you’ll want to automate connections. This becomes easy with the automation scripting language expect. Install it, then create a script file:
$ sudo apt-get install expect
In this example the script forgets the watch, finds the watch, connects to the watch, gets some info and then disconnects:
set timeout 10
## execute bluetoothctl
spawn sudo bluetoothctl
## forget about the device - if connected previously (<MAC> stands for your watch's address)
send "remove <MAC>\r"
## switch on the dongle
send "power on\r"
expect "Changing power on succeeded"
## scan for devices
send "scan on\r"
expect "Discovery started"
## set the agent
send "agent on\r"
expect "Agent registered"
## connect to watch
send "connect <MAC>\r"
expect "Connection successful"
## get some info, then quit
send "info <MAC>\r"
send "quit\r"
In the script, send sends a command – don’t forget to add the carriage return, \r – and expect is used to wait for a response within the timeout period, here set to 10 seconds. expect -re uses a regex when looking for a reply; otherwise a literal string is matched. So much more can be done with expect and there are many tutorials, such as this one written by FluidBank.
More Bluetooth Data
For analysing bluetooth data a couple of very useful tools are available: Wireshark and Android data logging. I will go through the installation but not look at the data in any detail, as this posting is getting a bit long. This section is in two parts: installing Wireshark, and the Android Debug Bridge.
Sniffing with the Shark
Wireshark is a network and bluetooth packet sniffer; it shows you the network and bluetooth traffic occurring on your Pi. Here is a quick installation method for a reasonably new version of Wireshark (v2.2.4) from the backports; answer yes to the question “Should non-superusers be able to capture packets?”:
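With the backports repository enabled, the install itself is one command; the repository name here assumes a jessie system, so adjust to match your release:

```
$ sudo apt-get update
$ sudo apt-get -t jessie-backports install wireshark
```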
and if you get a message about permissions, reconfigure the package and answer yes:
$ sudo dpkg-reconfigure wireshark-common
Start Wireshark and double click your bluetooth device on the list, in my case bluetooth0. There is not much to see, as Wireshark will only show traffic between the watch and the Pi:
Android Debug Bridge – ADB
For Android 4.2.2 and above, activate developer mode on the phone: go to Settings, tap About Phone, and at the bottom of the list tap Build Number seven times. Back on the main settings page Developer Options will have appeared; tap it and turn USB Debugging on. With the phone plugged into a USB port a little Android head should appear in the information bar at the top-left of the screen. To begin we will need to install some udev rules written by Nicolas Bernaerts:
At this point an “allow USB debugging” dialog will appear on the phone; give permission and tick “always allow” to authorise it. ADB will now show the phone as a device:
android tools install
List of devices attached
If the device list is empty, with everything plugged in and the phone set up in developer mode, start your diagnosis by checking udev: open another terminal window and view logging with udevadm monitor --environment, and reload the rules with sudo udevadm control --reload. I’m not entirely sure what I did to get it from ‘not working’ to ‘working’. If all else fails, elevate yourself to root.
With ADB now set up we can capture the Bluetooth data being exchanged. With bluetooth off, find Enable Bluetooth HCI snoop log in the Developer Options and turn it on. In the smartwatch app synchronise with your watch; once complete, turn Bluetooth off manually – this is to minimise the amount of captured data. Don’t forget to turn logging off on the phone when done. To find where the log file has been stored and copy it from the phone to the Pi use:
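Something along these lines – the log path varies between phones, hence searching for it first:

```
$ adb shell "find /sdcard -name btsnoop_hci.log"
$ adb pull /sdcard/btsnoop_hci.log
```

The pulled file can then be opened directly in Wireshark.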
This wasn’t quite the posting I originally had in mind. I wanted to decode the data from the watch for my own use, making something more useful – impressive graphs and charts – than that provided by the Android app VeryFit 2.0, but as the manufacturer has chosen to use proprietary GATT codes, the job is that much harder. It may be simpler to just buy an expensive FitBit and download the data from them. But in writing this I now know a few things that were previously unknown, and I hope that it has provided some light to your BlueZ (a pun! right at the end!).
If you use the Teensy micro-controller with the Arduino IDE on an Apple Mac, you may have come across a persistent firewall error message when starting the IDE; I have seen this error for quite a while, over a range of system and software upgrades. I have applied this fix to:
OS X 10.10 Yosemite and above / macOS 10.12 Sierra
Arduino IDE 1.6.13 – all versions, at least 1.5 and above.
The Arduino IDE is installed in the default applications folder, as is the Teensyduino. Some knowledge of using the terminal is required.
On your Apple Mac, you installed the Teensyduino software for the Teensy and now when you start the Arduino IDE this error message appears:
Do you want the application “Arduino.app” to accept incoming network connections?
Clicking Deny may limit the application’s behaviour. This setting can be changed in the Firewall pane of Security & Privacy preferences.
When the Arduino IDE is installed it includes a certificate to assure the system that everything is correct. The Teensyduino installation makes changes to the IDE configuration, so the signature in the certificate no longer matches the installation.
You can verify the failed certificate in the terminal with the spctl command:
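For example, assessing the application shows the broken signature; the ad-hoc re-sign at the end is the usual remedy for a modified bundle – treat it as a sketch rather than gospel:

```
$ spctl --assess --verbose /Applications/Arduino.app
/Applications/Arduino.app: rejected
$ sudo codesign --force --deep --sign - /Applications/Arduino.app
```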
This is a follow up to one of my previous postings: Python and the Oracle Client. The main databases here are being upgraded to Oracle 12 and I’ve taken the opportunity to update the client used by my Python scripts; it’s also good practice to install new clients when old versions go out of support.
The system I am upgrading here has the following configuration, but this should work with any RPM based distribution, such as CentOS and SUSE:
Red Hat Enterprise Linux Server release 6.6 (Santiago)
To find the versions of your currently installed software: $ python
Python 2.6.6 (r266:84292, Nov 21 2013, 10:50:32)
[GCC 4.4.7 20120313 (Red Hat 4.4.7-4)] on linux2
Type "help", "copyright", "credits" or "license" for more information.
>>> import cx_Oracle
>>> print cx_Oracle.version
and the Oracle client: $ rpm -qa | grep oracle
If you have the old versions installed you will need to do some tidying up by removing the client and the Python connector; version 11 of the client, despite being RPM packaged, had some non-standard elements. Use rpm to delete the old version of the instant client, removing devel first: $ sudo su
# rpm -ev oracle-instantclient11.2-devel-<version>.x86_64
# rpm -ev oracle-instantclient11.2-basic-<version>.x86_64
you may also need to remove the library reference from a previous installation: # rm /etc/ld.so.conf.d/oracle.conf
To remove the Python Oracle connector there are two methods. Manually, by finding the previously installed package, deleting the files and editing the package list: # find / -name cx_Oracle.py -print
# cd /usr/lib/python2.6/site-packages
# rm -rf cx_Oracle-5.1.2-py2.6-linux-x86_64.egg
now edit the easy-install.pth file # nano /usr/lib/python2.6/site-packages/easy-install.pth
and remove the line: ./cx_Oracle-5.1.2-py2.6-linux-x86_64.egg
Or do it the easy way, if you have pip installed: # sudo pip uninstall cx_Oracle
easy_install does not have an uninstall option.
Download and install version 12 of the Instant Client and SDK (devel); these can be obtained from http://www.oracle.com/technetwork/database/features/instant-client/index-097480.html. For Linux choose the correct flavour for your installed operating system: x86, or x86-64 for 64 bit systems. You will need to register on the site to gain access to the files. # rpm -i oracle-instantclient12.1-basic-<version>.x86_64.rpm
# rpm -i oracle-instantclient12.1-devel-<version>.x86_64.rpm
Now to install the python connector: # easy_install cx-Oracle
or, the recommended method: # pip install cx-Oracle
Installation for the version 12 client is much more straightforward than for version 11.
A quick test to ensure that the expected versions appear, and that you can connect to the database. Python 2.6.6 (r266:84292, Nov 21 2013, 10:50:32)
[GCC 4.4.7 20120313 (Red Hat 4.4.7-4)] on linux2
Type "help", "copyright", "credits" or "license" for more information.
>>> import cx_Oracle
>>> oraConn = "<USERNAME>/<PASSWORD>@<DATABASE HOST>:<DATABASE PORT>/<SERVICE>"
>>> ocDB = cx_Oracle.connect(oraConn)
Two drops of water colliding, frozen in time using the power of high speed flash photography, produce an infinite variety of shapes. While this can be done with a pipette, a camera, a single flash gun, practice and good hand/eye co-ordination, I have an Arduino Uno and I am going to use it. What is happening in this picture? Two carefully timed water droplets have been released from above and are plummeting towards a bowl of water. The first drop has hit the water and is rebounding; just as the up-spout reaches its zenith, the second drop collides with the top, resulting in a mushroom shaped splat, with the event captured by the camera at an effective exposure of 1/10,000th of a second. In this post I’ll be sharing my experiences in creating these water drop images, looking at the photography equipment, electronics, and technique.
Camera: This can be any DSLR or advanced compact; it must have Bulb mode and be triggerable by a wired electronic connection. Some have an IR remote, but I found this difficult to set up. Set the ISO to around 200.
Lens: I use a 100mm macro, with focus set to manual and image stabilisation off. The aperture is set high, at least f/22, to give a suitable depth of field and improve image sharpness.
Tripod: A good solid one with easy to adjust ball head.
Flash: I use up to five flash guns for my photos: two for back light, one to give an under-light through the glass bowl, another for front light and finally one handheld. Rechargeable batteries for the flashes are recommended; I use 2400mAh NiMH Duracells.
The flash guns need to be in manual mode at their lowest power setting; this gives the shortest duration of flash for the sharpest results. As you increase the flash's power the duration of the light emitted gets longer, causing blurred images. On my Canon flash I set it to 1/128 power, and on the Nissin Di622 I set the EV to -1.5.
For connecting the Arduino to the flashes I use a 2.4GHz wireless remote trigger, with four receivers and a modified hotshoe mount attached to the transmitter. Look for the Yongnuo RF-602 Remote Flash Trigger on ebay (not to be confused with the remote shutter release). Most modern TTL flash guns appear to be missing the wired remote trigger connection that you can just plug into.
The flashes also have a built-in slave trigger: when one sees that another flash has gone off, it sets itself off too. On the Canon flashes this appears to only work in E-TTL mode and can't be used here, but the Nissins work well.
The frame is bits of wood held together with glue and stands about 75cm high; this allows the water to accelerate and produce decent-sized splashes. At the base is an extra-large seed tray, the type without holes, to contain any spillages. This normally has a glass bowl full of water acting as the drip splash event zone. Halfway up the frame is mounted the laser and detector, and at the top a reservoir of water and a solenoid valve.
The reservoir is a one-litre plastic storage tub from Poundland with a hole drilled in the base and a short length of 8mm PVC tubing hot-glued into place. The tubing can be difficult to glue as it's rather flexible; pushing in a short section of solid tube, made from the outer of a disposable biro, fixes that. This pipe is connected to the solenoid, observing the correct direction of flow marked on the valve.
The reservoir has a Mariotte syphon fitted to the lid to provide a constant and stable water pressure to the valve; the pipe from the lid ends about 2cm short of the reservoir base.
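As a sanity check on the timings, the free-fall physics of the rig can be roughly estimated. This is a sketch under assumptions taken from the text (a ~75cm drop with the laser gate roughly halfway up); the real flashWait also has to allow for the rebound and the second drop, so treat these numbers only as a starting point.

```cpp
#include <cmath>

// Rough free-fall estimate for the rig described above.
// Assumed geometry (from the text): reservoir ~75cm above the water,
// laser gate roughly halfway up at ~37.5cm.
const double g = 9.81;             // gravity, m/s^2
const double dropHeight = 0.75;    // reservoir to water surface, m
const double laserHeight = 0.375;  // laser gate above the water, m

// time (seconds) for a drop starting at rest to fall a given distance
double fallTime(double metres) { return std::sqrt(2.0 * metres / g); }

// time from a drop breaking the laser beam to hitting the water
double laserToSplash() {
    return fallTime(dropHeight) - fallTime(dropHeight - laserHeight);
}
```

With these assumed heights the total fall is about 390ms and the laser-to-splash gap about 115ms; the 340ms flashWait default in the sketch is longer because the photo is timed for the collision with the second drop, not the first impact.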
The Arduino and control electronics are all set to produce this photo taking sequence:
press ‘play’ button on remote control
lights out – dark room
open shutter on camera
solenoid releases two drops of water
drips pass through laser detector – timer started
drops arrive and do their thing
flash guns triggered by timer – picture taken
shutter closed on camera
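The steps above can be sketched as an ordered sequence. The step names here are illustrative only; the actual Arduino sketch at the end of the post drives hardware pins instead.

```cpp
#include <string>
#include <vector>

// Illustrative ordering of the shot sequence listed above.
std::vector<std::string> shotSequence() {
    return {
        "lights out",     // relay cuts the room light for the dark room
        "open shutter",   // camera held open in Bulb mode
        "release drops",  // solenoid pulses twice, betweenDrips ms apart
        "laser trigger",  // a drip breaks the beam and starts the timer
        "fire flash",     // flashWait ms later the flashes freeze the splash
        "close shutter",  // shutter released, lights back on
    };
}
```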
The electronic circuit can be broken down into these six blocks: lights, laser, IR receiver, solenoid control, flash control and camera control. Each diagram shows the label name for the pin used rather than a pin number. The diagrams can be enlarged by clicking on them.
IR Receiver: The IR receiver allows use of an old TV remote control. My original design was to have a rotary dial and a small OLED display, but this simplified everything considerably. If you don’t have a spare remote one can be gotten from Poundland. I have the Arduino send any text output to a laptop on the USB port.
Laser: Warning: keep away from eyes; permanent damage can occur with exposure to any laser. The laser is used with a phototransistor to detect drips of water as they plummet to their splash event. I used a small 3 Volt 5mW red laser with a built-in lens, adding a resistor and diode in series to prevent over-voltage as they're a bit delicate; a modified laser pointer will do just as well. The TEPT4400 phototransistor is rated for visible light and has higher sensitivity to change than a photoresistor.
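The detection logic this implies can be sketched as follows: calibrate a threshold against the undisturbed beam, then treat any reading that falls below it as a drip. The struct and values here are invented for illustration; the real sketch reads the phototransistor with analogRead.

```cpp
// Sketch of laser-gate drop detection: a drip crossing the beam dims
// the phototransistor, so a reading below the calibrated threshold
// counts as a detection. Readings are 10-bit analogRead-style values.
struct LaserGate {
    int threshold = 0;
    int tolerance = 2;  // plays the role of thresholdTol in the sketch

    // calibrate against a spot reading of the undisturbed beam
    void calibrate(int spotReading) { threshold = spotReading - tolerance; }

    // true when a drip interrupts the beam
    bool dropDetected(int reading) const { return reading < threshold; }
};
```

The tolerance stops ordinary sensor noise on the undisturbed beam from registering as a drop.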
Lights: Warning: Mains Electricity Can Kill; this is to be avoided. If you are uncertain about this part then don't do it. I rapidly found that working in darkness between shots just made life difficult, and finding the light switch became a hassle. To fix that I got a pre-made 5V relay module and wired it up to a table lamp to provide some illumination. Using a standard wall socket and backbox, connect live through the normally open side of the relay, and the neutral and earth directly to the socket.
Remember to keep the electricity away from fingers (and any other body parts) and water.
Solenoid Control: I use a 12V solenoid (search for "12v solenoid valve water arduino" on ebay; a couple of sellers have suitable models with connectors included). I use a MOSFET to switch the power, as detailed in one of my previous blog postings.
Flash and Camera Control: The electronics for the camera and flash are closely related. Both use the ILD74 optocoupler to electrically isolate the camera and flash equipment from the Arduino. Although the camera focus connection is not used here I have included it as it may be useful later on.
The Canon camera has two different types of wired connection on the shutter release depending on the model of camera, a standard three pin 2.5mm jack or a N-3 connector (search for “canon N3 connecting cable” on ebay). A list of connectors for other makes of cameras can be found here.
Sound: Although not used here, this setup works well with a piezo microphone for use with popping water balloons and the like, use a buzzer that is enclosed in a plastic housing with a hole on top and buzzes when DC power is applied. Connect the output to an analogue pin on the Arduino, your software can use a very similar method to that for the laser.
Setup and Use
Have plenty of dish cloths or towels to hand, this can get a bit moist. Keep an eye on your camera equipment making sure it doesn’t get wet.
For setting up a shot I use a steel ruler with a magnet stuck to it. I set the water dripping to make sure it lands where I want on the magnet, then focus the camera on the magnet; take away the ruler and you have your properly focused splash event.
Add colour with food dyes; adding these to the reservoir seems to work best and keeps the water in the splashdown area clean. Guar gum thickens the water and makes larger drops and bigger splashes; you only need to add a small amount, about a teaspoon per litre, and you'll need to sieve out any lumps before use. Fluorescein is quite entertaining when used with a UV lamp, adding a green glow to your splashes. Adding diluted water-based paints to the reservoir can add a lot of colour, but has a tendency to block the solenoid.
Sparkly backdrops can be gotten from the craft section in stationers. An A4 sized (21cm x 30cm) sheet is normally enough. Try bouncing the flash off the backdrop.
It is all about experimentation; expect to take lots of photos, many of which will be poor. Make notes of timings when you get a good shot; very small changes in timings can produce fairly dramatic effects.
Here is an Arduino sketch; press Play on the remote to start a two-drip sequence. Adjust flashWait, the time in milliseconds between the laser detect and the flash, with Volume +/- (+10/-10), Channel +/- (+5/-5) and Fast Fwd/Rev (+2/-2); adjust betweenDrips with 4 (+1) and 7 (-1).
// digital IO
#define IR_RX 12
#define CAMERA_FLASH 8
#define CAMERA_SHOOT 7
#define MAINS_SW 2
#define SOLENOID 4
#define LASER 6
// analog inputs
#define PHOTO 1
byte threshold = 0;  // current state of the phototransistor
byte thresholdTol = 2;  // tolerance of the threshold
// default timings
unsigned int flashWait = 340;  // time in millis between the laser being triggered and the flash being fired
unsigned int solenoidWait = 6;  // time in millis the solenoid is triggered
unsigned int betweenDrips = 120;  // time in millis between drops of water
// opens and closes the shutter
// flash the flashes, close the shutter, turn lights back on.
shutter(0);// close shutter
// calibrate the laser sensor - set the threshold
// set the threshold
Serial.print("spot reading ");
// things to do if timeout reached
closeShutter();// close shutter, turn lights on
// shutterOpen = false;
// play on remote
openShutter();// turn lights off, open shutter
//make drops of water
Serial.print("Solenoid trigger - multi drip: ");
// wait for laser to detect drop, with a four second timeout
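The remote-control mapping described just before the sketch (Volume, Channel and Fast Fwd/Rev adjust flashWait; keys 4 and 7 adjust betweenDrips) could be modelled like this. The Button enum is invented for illustration; a real build would translate raw IR codes from the receiver into it.

```cpp
// Each remote button nudges one of the two timing values.
enum Button { VOL_UP, VOL_DOWN, CH_UP, CH_DOWN, FFWD, FREV, KEY_4, KEY_7 };

struct Timings {
    unsigned int flashWait = 340;     // laser trigger -> flash, ms
    unsigned int betweenDrips = 120;  // gap between the two drops, ms
};

void applyButton(Timings &t, Button b) {
    switch (b) {
        case VOL_UP:   t.flashWait += 10; break;  // coarse adjustment
        case VOL_DOWN: t.flashWait -= 10; break;
        case CH_UP:    t.flashWait += 5;  break;  // medium adjustment
        case CH_DOWN:  t.flashWait -= 5;  break;
        case FFWD:     t.flashWait += 2;  break;  // fine adjustment
        case FREV:     t.flashWait -= 2;  break;
        case KEY_4:    t.betweenDrips += 1; break;
        case KEY_7:    t.betweenDrips -= 1; break;
    }
}
```

Having three step sizes makes it quick to home in on a good flashWait and then fine-tune it a couple of milliseconds at a time.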