GRIP-generated code to track the 2016 FRC tower on a Raspberry Pi using an IP camera
## Set up the Raspberry Pi
There are two options:
- Follow [this](http://www.pyimagesearch.com/2016/04/18/install-guide-raspberry-pi-3-raspbian-jessie-opencv-3/) tutorial
  - Skip everything inside step 4 related to virtual environments
  - Change the OpenCV version from 3.1.0 to whatever the latest version is (currently using 3.2.0)
  - Install pynetworktables (a quick import check follows this list): `pip3 install pynetworktables`
- Download and flash the image on GitHub
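After either option, a quick check like the sketch below confirms that both dependencies are importable under Python 3; the exact OpenCV version printed depends on what you installed.

```python
# Sanity check: run with python3 after the install finishes.
import cv2
import networktables  # installed by `pip3 install pynetworktables`

print("OpenCV version:", cv2.__version__)
print("pynetworktables imported OK")
```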
## Generating and editing your code
- Write your GRIP pipeline to process the image
- Export the code in Python: Tools -> Generate Code
- Edit the exported code (a sketch of the edited file follows this list)
  - Change the class name to "GripPipeline": `class GripPipeline:`
  - Add a second parameter "source0" to the process function: `def process(self, source0):`
  - In the process function, remove "self.__" from the input of the first step of the pipeline (in this case resize): `# Step Resize_Image0: self.__resize_image_input = source0`
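For orientation, here is a minimal sketch of what the edited export might look like, assuming a pipeline whose only step is a resize. The real generated file will contain whatever steps and parameter values your GRIP pipeline uses; only the class name, the `process()` signature, and the first step's input are the edits listed above.

```python
import cv2


class GripPipeline:
    """Sketch of an edited GRIP export with a single resize step."""

    def __init__(self):
        # Step parameters; GRIP writes the values you chose in the UI.
        self.__resize_image_width = 320.0
        self.__resize_image_height = 240.0
        self.__resize_image_interpolation = cv2.INTER_CUBIC
        self.resize_image_output = None

    def process(self, source0):
        # Step Resize_Image0: the first step now reads from the source0
        # parameter instead of a self.__ attribute.
        self.resize_image_output = cv2.resize(
            source0,
            (int(self.__resize_image_width), int(self.__resize_image_height)),
            interpolation=self.__resize_image_interpolation,
        )
        # ...any remaining steps (HSV threshold, find contours, etc.) would
        # follow here, each consuming the previous step's output.
```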
## Modifying sample_vision.py
- Make sure the import statement is correct for your generated pipeline (the sketch after this list shows how everything fits together): `from PIPELINE_FILE_NAME import CLASS_NAME`
- Change the NetworkTables IP address to that of the roboRIO: `NetworkTable.setIPAddress("10.TE.AM.XX")`
- Pick a video input method
  - If using an IP camera, change the address of the camera: `cap = cv2.VideoCapture("http://10.TE.AM.3/mjpg/video.mjpg")`
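Putting the edits together, a minimal sample_vision.py might look roughly like the sketch below. It assumes the older NetworkTable API shown above (newer pynetworktables releases initialize differently, e.g. `NetworkTables.initialize(server=...)`), a hypothetical module name `grip_pipeline`, and a placeholder table/key (`vision`/`centerX`) for publishing results.

```python
import cv2
from networktables import NetworkTable

# PIPELINE_FILE_NAME / CLASS_NAME from the step above; the module name here
# is a placeholder for whatever you saved the generated file as.
from grip_pipeline import GripPipeline

# Connect to the roboRIO as a NetworkTables client (TE.AM = your team number).
NetworkTable.setIPAddress("10.TE.AM.XX")
NetworkTable.setClientMode()
NetworkTable.initialize()
table = NetworkTable.getTable("vision")  # placeholder table name

# Video input: IP camera stream here, or cv2.VideoCapture(0) for a USB camera.
cap = cv2.VideoCapture("http://10.TE.AM.3/mjpg/video.mjpg")

pipeline = GripPipeline()

while True:
    have_frame, frame = cap.read()
    if not have_frame:
        continue
    pipeline.process(frame)
    # Publish whatever your pipeline produces; key and value are placeholders.
    table.putNumber("centerX", 0.0)
```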
## References
- https://github.com/WPIRoboticsProjects/GRIP-code-generation/tree/master/python
- https://github.com/Frc2481/paul-bunyan/blob/master/Camera/main.py
- https://github.com/robotpy/pynetworktables
- https://github.com/WPIRoboticsProjects/GRIP/wiki/Running-GRIP-on-a-Raspberry-Pi-2