Bases: SimpleCV.Camera.FrameSource
The Camera class manages input from a basic camera. Note that once the camera is initialized, it will be locked from being used by other processes. On Linux you can check manually whether you have compatible devices by looking for /dev/video* devices.
This class wraps OpenCV's cvCapture class and associated methods. Read up on OpenCV's CaptureFromCAM method for more details if you need finer control than basic frame retrieval.
Retrieve an Image object from the camera. If you experience problems with stale frames from the camera's hardware buffer, increase the flushcache number to dequeue multiple frames before retrieval.
We’re working on how to solve this problem.
Bases: threading.Thread
This is a helper thread which continually debuffers the camera frames. If you don’t do this, cameras may constantly give you a frame behind, which causes problems at low sample rates. This makes sure the frames returned by your camera are fresh.
An abstract Camera-type class for handling multiple types of video input. Any source of images inherits from it.
Camera calibration will help remove distortion and fisheye effects. It is agnostic of the imagery source and can be used with any camera.
imageList is a list of color calibration images.
grid_sz is the actual grid size of the calibration grid; the unit used will be the calibration unit value (i.e. if in doubt use meters, or U.S. standard).
dimensions is the count of the interior corners in the calibration grid. For example, a checkerboard grid with 4x4 black squares (8x8 squares total) has seven interior corners per side.
The easiest way to run calibration is to run the calibrate.py file under the tools directory for SimpleCV. This will walk you through the calibration process.
Load a calibration matrix from file. The filename should be the stem of the calibration file names. E.g. if the calibration files are MyWebcamIntrinsic.xml and MyWebcamDistortion.xml, then load the calibration file "MyWebcam".
Returns True if the file was successfully loaded, False otherwise.
If given an image, apply the undistortion given by the camera's matrix and return the result.
If given a 1xN 2D cvmat or a 2xN numpy array, it will un-distort points of measurement and return them in the original coordinate system.
Bases: SimpleCV.Camera.FrameSource
The JpegStreamCamera takes a URL of a JPEG stream and treats it like a camera. The current frame can always be accessed with getImage()
Requires the [Python Imaging Library](http://www.pythonware.com/library/pil/handbook/index.htm)
Bases: threading.Thread
Threaded class for pulling down JPEG streams and breaking up the images
Bases: SimpleCV.Camera.FrameSource
This is an experimental wrapper for the Freenect python libraries; you can use getImage() and getDepth() for separate channel images.
Bases: SimpleCV.Camera.FrameSource
The virtual camera lets you test algorithms or functions by providing a Camera object which is not a physically connected device.
Currently, VirtualCamera supports “image” and “video” source types.
Color is a class that stores commonly used colors in a simple and easy to remember format, instead of requiring you to remember a color's specific RGB value.
To use a color in your code, type Color.RED. For example, to draw a red line: line.draw(Color.RED)
ColorCurve is a color spline class for performing color correction. It can take as parameters a SciPy Univariate spline, or an array with at least 4 point pairs. Either of these must map in a 255x255 space. The curve can then be used in the applyRGBCurve, applyHSVCurve, and applyIntensityCurve functions:
clr = ColorCurve([[0,0], [100, 120], [180, 230], [255, 255]])
image.applyIntensityCurve(clr)
The only property, mCurve, is a linear array with 256 elements from 0 to 255.
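As a rough illustration of what the curve represents (SimpleCV fits a SciPy spline; this sketch uses simple piecewise-linear interpolation instead, so the values differ slightly from the real class):

```python
def build_curve(points):
    """Build a 256-entry lookup table from (x, y) control points
    by piecewise-linear interpolation (SimpleCV fits a spline)."""
    points = sorted(points)
    lut = []
    for x in range(256):
        # find the segment containing x and interpolate within it
        for (x0, y0), (x1, y1) in zip(points, points[1:]):
            if x0 <= x <= x1:
                t = (x - x0) / float(x1 - x0) if x1 != x0 else 0.0
                lut.append(y0 + t * (y1 - y0))
                break
    return lut

curve = build_curve([[0, 0], [100, 120], [180, 230], [255, 255]])
```

The resulting 256-entry table plays the role of mCurve: index in with an input intensity, read out the corrected intensity.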
A color map takes a start and end point in 3D space and lets you map a range of values to it. Using the colormap like an array gives you the mapped color.
This is useful for color coding elements by an attribute:
blobs = image.findBlobs()
cm = ColorMap(startcolor = Color.RED, endcolor = Color.BLUE,
              startmap = min(blobs.area()), endmap = max(blobs.area()))
for b in blobs:
b.draw(cm[b.area()])
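The mapping behaviour can be sketched in plain Python as linear interpolation between the two colors over the value range (a simplified stand-in, not the actual ColorMap class):

```python
class SimpleColorMap(object):
    """Minimal sketch of a ColorMap: linearly interpolate between
    startcolor and endcolor over [startmap, endmap]."""
    def __init__(self, startcolor, endcolor, startmap, endmap):
        self.c0, self.c1 = startcolor, endcolor
        self.v0, self.v1 = float(startmap), float(endmap)

    def __getitem__(self, value):
        # clamp the interpolation parameter, then blend each channel
        t = (value - self.v0) / (self.v1 - self.v0)
        t = max(0.0, min(1.0, t))
        return tuple(int(round(a + t * (b - a)))
                     for a, b in zip(self.c0, self.c1))

cm = SimpleColorMap((255, 0, 0), (0, 0, 255), 0, 100)
```

Indexing with a value at the start of the range returns the start color, the end of the range returns the end color, and out-of-range values are clamped.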
The color model is used to model the color of foreground and background objects by using a training set of images.
You can create the color model with any number of "training" images, or add images to the model with add() and remove(). Then for your data images, you can use thresholdImage() to return a segmented picture.
WindowStream opens a window (Pygame Display Surface) to which you can write images. The default resolution is 640, 480 – but you can also specify 0,0 which will maximize the display. Flags are pygame constants, including:
By default, display will attempt to scale the input image to fit neatly on the screen with minimal distortion. This means that if the aspect ratio matches the screen it will scale cleanly. If your image does not match the screen aspect ratio, we will scale it to fit nicely while maintaining its natural aspect ratio.
Because SimpleCV performs this scaling there are two sets of input mouse coordinates: (mouseX, mouseY), which are scaled to the image, and (mouseRawX, mouseRawY), which are the actual screen coordinates.
pygame.FULLSCREEN - create a fullscreen display
pygame.DOUBLEBUF - recommended for HWSURFACE or OPENGL
pygame.HWSURFACE - hardware accelerated, only in FULLSCREEN
pygame.OPENGL - create an OpenGL renderable display
pygame.RESIZABLE - display window should be sizeable
pygame.NOFRAME - display window will have no border or controls
Display should be used in a while loop with the isDone() method, which checks events and sets the following internal state controls:
mouseX - the x position of the mouse cursor on the input image
mouseY - the y position of the mouse cursor on the input image
mouseRawX - the x position of the mouse on the screen
mouseRawY - the y position of the mouse on the screen
NOTE: The mouse position on the screen is not the mouse position on the image. If you are trying to draw on the image or take in coordinates, use mouseX and mouseY, as these values are scaled along with the image.
mouseLeft - the state of the left button
mouseRight - the state of the right button
mouseMiddle - the state of the middle button
mouseWheelUp - if the wheel has been clicked towards the top of the mouse
mouseWheelDown - if the wheel has been clicked towards the bottom of the mouse
Example:
>>> display = Display(resolution = (800, 600)) # create a new display to draw images on
>>> cam = Camera() # initialize the camera
>>> done = False # setup boolean to stop the program
# Loop until not needed
while not display.isDone():
    cam.getImage().flipHorizontal().save(display) # get image, flip it so it looks mirrored, save to display
    time.sleep(0.01) # sleep for 10 milliseconds so the computer can do other things
    if display.mouseLeft:
        display.done = True
writeFrame copies the given Image object to the display; you can also use Image.save().
writeFrame tries to fit the image to the display with the minimum amount of distortion possible. When fit=True, writeFrame will decide how to scale the image such that the aspect ratio is maintained and the smallest amount of distortion possible is introduced. This means the axis that needs the minimum scaling will be shrunk or enlarged to match the display.
When fit=False, writeFrame will crop and center the image as best it can. If the image is too big it is cropped and centered. If it is too small it is centered. If it is too big along one axis, that axis is cropped and the other axis is centered if necessary.
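The fit=True scaling rule can be sketched as picking the single scale factor that fits the image inside the display while preserving its aspect ratio (illustrative only, not the library's implementation):

```python
def fit_dimensions(img_size, display_size):
    """Scale img_size to fit inside display_size while keeping
    the aspect ratio (a sketch of what fit=True aims for)."""
    iw, ih = img_size
    dw, dh = display_size
    # the axis needing the smaller scale factor limits the result
    scale = min(dw / float(iw), dh / float(ih))
    return (int(iw * scale), int(ih * scale))
```

For example, a 640x480 image on an 800x600 display scales cleanly (same aspect ratio), while a 1600x600 image is limited by its width.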
DrawingLayer gives you a way to mark up Image classes without changing the image data itself. This class wraps pygame’s Surface class and provides basic drawing and text rendering functions
Example:
>>> image = Image("/path/to/image.png")
>>> image2 = Image("/path/to/image2.png")
>>> image.dl().blit(image2) # write image2 on top of image
Draw a bezier curve based on a control point and a number of steps.
Blit one image onto the drawing layer at upper left coordinates
Draw a rectangle given the center (x,y) of the rectangle and dimensions (width, height)
color - The object's color as a SimpleCV Color object; if no value is specified the default is used.
alpha - The alpha blending for the object. If this value is -1 then the layer default value is used. A value of 255 means opaque, while 0 means transparent.
w - The line width in pixels. This does not work if antialiasing is enabled.
filled - The rectangle is filled in
Draw a circle given a location and a radius.
width - The line width in pixels. This does not work if antialiasing is enabled.
filled - The object is filled in
Draw an ellipse given a location and dimensions.
width - The line width in pixels. This does not work if antialiasing is enabled.
filled - The object is filled in
ezViewText works just like text but it sets both the foreground and background color and overwrites the image pixels. Use this method to make easily viewable text on a dynamic video stream.
fgcolor - The color of the text.
bgcolor - The background color for the text area.
Draw a single line from the (x,y) tuple start to the (x,y) tuple stop. Optional parameters:
width - The line width in pixels.
antialias - Draw an antialiased object of width one.
Draw a set of lines from the list of (x,y) tuples points. Lines are drawn between each successive pair of points.
Optional parameters:
width - The line width in pixels.
antialias - Draw an antialiased object of width one.
Draw a polygon from a list of (x,y) tuples.
width - The width in pixels. This does not work if antialiasing is enabled.
filled - The object is filled in
antialias - Draw the edges of the object antialiased. Note this does not work when the object is filled.
Draw a rectangle given topLeft, the (x,y) coordinate of the top left corner, and dimensions (w,h), the width and height.
w - The line width in pixels. This does not work if antialiasing is enabled.
filled - The rectangle is filled in
Draw a rectangle given two (x,y) points
w - The line width in pixels. This does not work if antialiasing is enabled.
filled - The rectangle is filled in
Add this layer to another layer.
Blit this layer to another surface.
This method allows you to set the surface manually.
This method sets the default rendering color.
This method sets the font size roughly in points. A size of 10 is almost too small to read. A size of 20 is roughly 10 pixels high and a good choice.
sprite draws a sprite (a second small image) onto the current layer. The sprite can be loaded directly from a supported image file like a gif, jpg, bmp, or png, or loaded as a surface or SCV image.
pos - the (x,y) position of the upper left hand corner of the sprite
scale - a scale multiplier as a float value. E.g. 1.1 makes the sprite 10% bigger
rot - a rotation angle in degrees
alpha - an alpha value: 255=opaque, 0=transparent.
Write a text string at a given location.
text - A text string to print.
location - The location to place the top right corner of the text
The Font class allows you to create a font object to be used in drawing or writing to images. There are some defaults available, to see them, just type Font.printFonts()
Get the font from the object to be used in drawing
Returns: PIL Image Font
Gets the size of the current font
Returns: Integer
The Image class is the heart of SimpleCV and allows you to convert to and from a number of source types with ease. It also has intelligent buffer management, so that modified copies of the Image required for algorithms such as edge detection can be cached and reused when appropriate.
Images are converted into 8-bit, 3-channel images in RGB colorspace. The class will automatically handle conversion from other representations into this standard format. If dimensions are passed, an empty image is created.
Examples:
>>> i = Image("/path/to/image.png")
>>> i = Camera().getImage()
You can also just load the SimpleCV logo using:
>>> img = Image("simplecv")
>>> img = Image("logo")
>>> img = Image("logo_inverted")
>>> img = Image("logo_transparent")
>>> img = Image("barcode")
Or you can load an image from a URL:
>>> img = Image("http://www.simplecv.org/image.png")
Adaptive Scale is used by the Display to automatically adjust image size to match the display size.
This is typically used like this:
>>> d = Display((800,600))
>>> i = Image((640, 480))
>>> i.save(d)
This would scale the image to match the display size of 800x600.
Push a new drawing layer onto the back of the layer stack
Apply three ColorCurve corrections in HSL space. The parameters are the Hue, Lightness (brightness/value), and Saturation ColorCurves.
Returns: IMAGE
Intensity applied to all three color channels
Render all of the layers onto the current image and return the result. Indices can be a list of integers specifying the layers to be used.
Apply three ColorCurve corrections in the RGB channels. The parameters are the Red, Green, and Blue ColorCurves.
Returns: IMAGE
Perform a binary threshold on the image, changing all values above thresh to maxv and all below to black. If a color tuple is provided, each color channel is thresholded separately.
If threshold is -1 (default), an adaptive method (Otsu's method) is used. If a blocksize is specified instead, a moving average over each region of block x block pixels is computed, and a threshold of local_mean - p is applied.
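The two threshold modes can be sketched in plain Python (illustrative only, operating on raw pixel values rather than Image objects; the adaptive case is shown on a 1-D row for simplicity):

```python
def binarize_pixel(value, thresh, maxv=255):
    """Fixed binary threshold for one pixel (the thresh >= 0 case)."""
    return maxv if value > thresh else 0

def adaptive_threshold(pixels, block, p):
    """Sketch of the adaptive case: each pixel is compared against
    the local mean of its surrounding block, minus the constant p."""
    out = []
    half = block // 2
    for i, v in enumerate(pixels):
        window = pixels[max(0, i - half):i + half + 1]
        local_mean = sum(window) / float(len(window))
        out.append(255 if v > local_mean - p else 0)
    return out
```

The adaptive form is useful when lighting varies across the image, since each region gets its own threshold.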
Take an image and copy it into this image at the specified position, returning the result. If pos plus the input image's size exceeds the size of this image, then img is cropped. pos is the top left corner of the input image.
Returns an image representing the distance of each pixel from a given color tuple, scaled between 0 (the given color) and 255. Pixels distant from the given tuple will appear brighter, and pixels closest to the target color will be darker.
By default this will give image intensity (distance from pure black)
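Conceptually, each output pixel is the Euclidean RGB distance to the target color, rescaled so the farthest possible color maps to 255 (a sketch of the idea, not the library code):

```python
import math

def color_distance(pixel, target=(0, 0, 0)):
    """Euclidean RGB distance from target, scaled to 0-255.
    The maximum possible distance is the black-to-white diagonal."""
    d = math.sqrt(sum((a - b) ** 2 for a, b in zip(pixel, target)))
    max_d = math.sqrt(3 * 255 ** 2)
    return int(255 * d / max_d)
```

With the default target of pure black, the result is effectively an intensity measure, as the description above notes.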
Convolution performs a shape change on an image. It is similar to something like a dilate. You pass it a kernel in the form of a list, np.array, or cvMat.
Example:
>>> img = Image("sampleimages/simplecv.png")
>>> kernel = [[1,0,0],[0,1,0],[0,0,1]]
>>> conv = img.convolve(kernel)
Return a full copy of the Image’s bitmap. Note that this is different from using python’s implicit copy function in that only the bitmap itself is copied.
Returns: IMAGE
Crop attempts to use the x and y position variables and the w and h width and height variables to crop the image. When centered is false, x and y define the top and left of the cropped rectangle. When centered is true the function uses x and y as the centroid of the cropped region.
You can also pass a feature into crop, and it will automatically return the cropped image within the bounding area of that feature.
Apply a morphological dilation. A dilation has the effect of smoothing blobs while intensifying the amount of noise blobs. This implementation uses the default OpenCV 3x3 square kernel. Dilation is effectively a local maxima detector: the kernel moves over the image and takes the maximum value inside the kernel.
iterations - this parameter is the number of times to apply/reapply the operation
See: http://en.wikipedia.org/wiki/Dilation_(morphology)
See: http://opencv.willowgarage.com/documentation/cpp/image_filtering.html#cv-dilate
Example Use: A part's blob needs to be smoother
Example Code: ./examples/MorphologyExample.py
Draw a circle on the Image. Parameters include:
* the center of the circle
* the radius in pixels
* a color tuple (default black)
* the thickness of the circle
Note that this function is deprecated; try to use DrawingLayer.circle() instead.
Returns: NONE - Inline Operation
Draw a line on the Image. Parameters include:
* pt1 - the first point for the line (tuple)
* pt2 - the second point on the line (tuple)
* a color tuple (default black)
* thickness of the line
Note that this modifies the image in-place and clears all buffers.
Returns: NONE - Inline Operation
This function draws the given string on the image at the specified coordinates.
The default color is blue, but you can pass it various colors. The text will default to the center of the screen if you don't pass it a position.
Finds an edge map Image using the Canny edge detection method. Edges will be brighter than the surrounding area.
The t1 parameter is roughly the “strength” of the edge required, and the value between t1 and t2 is used for edge linking. For more information:
<http://opencv.willowgarage.com/documentation/python/imgproc_feature_detection.html> <http://en.wikipedia.org/wiki/Canny_edge_detector>
Apply a morphological erosion. An erosion has the effect of removing small bits of noise and smoothing blobs. This implementation uses the default OpenCV 3x3 square kernel. Erosion is effectively a local minima detector: the kernel moves over the image and takes the minimum value inside the kernel.
iterations - this parameter is the number of times to apply/reapply the operation
See: http://en.wikipedia.org/wiki/Erosion_(morphology)
See: http://opencv.willowgarage.com/documentation/cpp/image_filtering.html#cv-erode
Example Use: A threshold/blob image has 'salt and pepper' noise.
Example Code: ./examples/MorphologyExample.py
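The local minimum/maximum behaviour of erosion and dilation can be sketched on a 1-D row of pixels (illustrative only; the real operations slide a 2-D kernel over the image):

```python
def erode_1d(pixels, ksize=3):
    """Erosion: each output pixel is the local minimum over the window."""
    half = ksize // 2
    return [min(pixels[max(0, i - half):i + half + 1])
            for i in range(len(pixels))]

def dilate_1d(pixels, ksize=3):
    """Dilation: each output pixel is the local maximum over the window."""
    half = ksize // 2
    return [max(pixels[max(0, i - half):i + half + 1])
            for i in range(len(pixels))]
```

These two primitives also explain the compound operations below: a morphological close is a dilation followed by an erosion, and an open is an erosion followed by a dilation.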
If you have the python-zxing library installed, you can find 2d and 1d barcodes in your image. These are returned as Barcode feature objects in a FeatureSet. The single parameter is the ZXing_path, if you don’t have the ZXING_LIBRARY env parameter set.
You can clone python-zxing at http://github.com/oostendo/python-zxing
Returns: BARCODE
This will look for continuous light regions and return them as Blob features in a FeatureSet. Parameters specify the binarize filter threshold value, and minimum and maximum size for blobs. If a threshold value is -1, it will use an adaptive threshold. See binarize() for more information about thresholding. The threshblocksize and threshconstant parameters are only used for adaptive threshold.
Note that this previously used cvblob and the python-cvblob library, which is no longer necessary
Returns: FEATURESET
Given an image, finds a chessboard within that image. Returns the Chessboard featureset. The Chessboard is typically used for calibration because of its evenly spaced corners.
The single parameter is the dimensions of the chessboard; a typical one can be found in SimpleCV/tools/CalibGrid.png
This will find corner Feature objects and return them as a FeatureSet, strongest corners first. The parameters give the number of corners to look for, the minimum quality of the corner feature, and the minimum distance between corners.
Returns: FEATURESET
Standard Test:
>>> img = Image("sampleimages/simplecv.png")
>>> corners = img.findCorners()
>>> if corners: True
True
Validation Test:
>>> img = Image("sampleimages/black.png")
>>> corners = img.findCorners()
>>> if not corners: True
True
If you want to find Haar Features (useful for face detection, among other purposes) this will return Haar feature objects in a FeatureSet. The parameters are:
* the scaling factor for subsequent rounds of the Haar cascade (default 1.2)
* the minimum number of rectangles that makes up an object (default 2)
* whether or not to use Canny pruning to reject areas with too many edges (default yes, set to 0 to disable)
For more information, consult the cv.HaarDetectObjects documentation
You will need to provide your own cascade file - these are usually found in /usr/local/share/opencv/haarcascades and specify a number of body parts.
Note that the cascade parameter can be either a filename, or a HaarCascade loaded with cv.Load().
Returns: FEATURESET
findLines will find line segments in your image and return Line feature objects in a FeatureSet. The parameters are:
* threshold, which determines the minimum "strength" of the line
* min line length - how many pixels long the line must be to be returned
* max line gap - how much gap is allowed between line segments to consider them the same line
* cannyth1 and cannyth2 - thresholds used in the edge detection step; refer to _getEdgeMap() for details
For more information, consult the cv.HoughLines2 documentation
This function searches an image for a template image. The template image is a smaller image that is searched for in the bigger image. This is a basic pattern finder and uses the standard OpenCV template (pattern) matching; it cannot handle scaling or rotation.
Template matching returns a match score for every pixel in the image. Often pixels that are near each other and a close match to the template are returned as a match. If the threshold is set too low, expect to get a huge number of values. The threshold parameter is in terms of the number of standard deviations from the mean match value you are looking for.
For example, matches that are above three standard deviations will return 0.1% of the pixels. In an 800x600 image this means there will be 800*600*0.001 = 480 matches.
This method returns the locations of wherever it finds a match above a threshold. Because of how template matching works, very often multiple instances of the template overlap significantly. The best approach is to find the centroid of all of these values. We suggest using an iterative k-means approach to find the centroids.
Example:
>>> image = Image("/path/to/img.png")
>>> pattern_image = image.crop(100,100,100,100)
>>> found_patterns = image.findTemplate(pattern_image)
>>> found_patterns.draw()
>>> image.show()
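The standard-deviation threshold described above can be sketched as follows (pure Python over a flat list of match scores; not the actual findTemplate internals):

```python
def matches_above_threshold(scores, n_std):
    """Keep (index, score) pairs more than n_std standard deviations
    above the mean match score (a sketch of the threshold semantics)."""
    mean = sum(scores) / float(len(scores))
    var = sum((s - mean) ** 2 for s in scores) / float(len(scores))
    cutoff = mean + n_std * var ** 0.5
    return [(i, s) for i, s in enumerate(scores) if s > cutoff]
```

With a higher n_std you keep only the strongest matches; with a low n_std the match list explodes, as the documentation warns.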
Horizontally mirror an image. Note that flip does not mean rotate 180 degrees! The two are different.
Returns: IMAGE
Vertically mirror an image. Note that flip does not mean rotate 180 degrees! The two are different.
Returns: IMAGE
Returns the value matched in the color space class, so for instance you would use: if(image.getColorSpace() == ColorSpace.RGB)
RETURNS: Integer
Return a drawing layer based on the provided index. If no index is provided, it will default to the top layer. If no layers exist, one will be created.
This function returns the Gray value for a particular image pixel given a specific row and column.
This function returns a single row of RGB values from the image.
This function returns a single row of grayscale values from the image.
Gets the pygame surface. This is used for rendering the display
RETURNS: pgsurface
This function returns the RGB value for a particular image pixel given a specific row and column.
This function returns a single column of RGB values from the image.
This function returns a single column of gray values from the image.
Return a grayscale version of the image.
Returns: IMAGE
Return a numpy array of the 1D histogram of intensity for pixels in the image. The single parameter is how many "bins" to have.
Returns an image representing the distance of each pixel from the given hue of a specific color. The hue is “wrapped” at 180, so we have to take the shorter of the distances between them – this gives a hue distance of max 90, which we’ll scale into a 0-255 grayscale image.
The minsaturation and minvalue are optional parameters to weed out very weak hue signals in the picture; such pixels will be pushed to max distance [255].
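The wrapped hue arithmetic can be sketched as follows (assuming OpenCV's 0-180 hue range, as the description above does):

```python
def hue_distance(h1, h2):
    """Wrapped hue distance: hues live on a circle of circumference
    180, so the true distance is at most 90; scale it to 0-255."""
    d = abs(h1 - h2)
    d = min(d, 180 - d)       # take the shorter way around the circle
    return int(255 * d / 90.0)
```

Hues near 0 and near 180 are therefore considered close, matching the "wrapped" behaviour described above.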
Returns the histogram of the hue channel for the image
Takes the histogram of hues and returns the peak hue values, which can be useful for determining the "main colors" in a picture.
The bins parameter can be used to lump hues together, by default it is 179 (the full resolution in OpenCV’s HSV format)
Peak detection code taken from https://gist.github.com/1178136 Converted from/based on a MATLAB script at http://billauer.co.il/peakdet.html
Returns a list of tuples, each tuple contains the hue, and the fraction of the image that has it.
Insert a new layer into the layer stack at the specified index
Calculate the integral image and return it as a numpy array. The integral image gives the sum of all of the pixels above and to the left of a given pixel location. It is useful for computing Haar cascades. The return type is a numpy array the same size as the image. The integral image requires 32-bit values, which are not easily supported by the SimpleCV Image class.
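A reference implementation of the integral image on a plain 2-D list, just to show the definition (SimpleCV delegates the real computation to OpenCV):

```python
def integral_image(img):
    """Integral image: entry (r, c) holds the sum of all pixels
    at rows <= r and columns <= c."""
    rows, cols = len(img), len(img[0])
    out = [[0] * cols for _ in range(rows)]
    for r in range(rows):
        row_sum = 0
        for c in range(cols):
            row_sum += img[r][c]
            # running row sum plus the integral of the rows above
            out[r][c] = row_sum + (out[r - 1][c] if r > 0 else 0)
    return out
```

This structure lets any rectangular sum be computed in four lookups, which is what makes Haar cascades fast.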
Invert (negative) the image. Note that this can also be done with the unary minus (-) operator.
Returns: IMAGE
The maximum value of this image and the other image, in each channel. If other is a number, returns the maximum of that number and each channel value.
Finds average color of all the pixels in the image.
Returns: IMAGE
Return all DrawingLayer objects as a single DrawingLayer
The minimum value of this image and the other image, in each channel. If other is a number, returns the minimum of that number and each channel value.
morphologyClose applies a morphological close operation which is effectively a dilation operation followed by a morphological erosion. This operation helps to ‘bring together’ or ‘close’ binary regions which are close together.
See: http://en.wikipedia.org/wiki/Closing_(morphology)
See: http://opencv.willowgarage.com/documentation/cpp/image_filtering.html#cv-morphologyex
Example Use: Use when a part which should be one blob is really two blobs.
Example Code: ./examples/MorphologyExample.py
The morphological gradient is the difference between the morphological dilation and the morphological erosion. This operation extracts the edges of blobs in the image.
See: http://en.wikipedia.org/wiki/Morphological_Gradient
See: http://opencv.willowgarage.com/documentation/cpp/image_filtering.html#cv-morphologyex
Example Use: Use when you have blobs but you really just want to know the blob edges.
Example Code: ./examples/MorphologyExample.py
morphologyOpen applies a morphological open operation which is effectively an erosion operation followed by a morphological dilation. This operation helps to ‘break apart’ or ‘open’ binary regions which are close together.
See: http://en.wikipedia.org/wiki/Opening_(morphology)
See: http://opencv.willowgarage.com/documentation/cpp/image_filtering.html#cv-morphologyex
Example Use: Two-part blobs are 'sticking' together.
Example Code: ./examples/MorphologyExample.py
This function will return any text it can find using OCR on the image.
Please note that it does not handle rotation well, so if you need it in your application, try to rotate and/or crop the area so that the text reads the way a document would.
RETURNS: String
If you're having run-time problems, check that these dependencies are installed:
http://code.google.com/p/tesseract-ocr/
http://code.google.com/p/python-tesseract/
Region select is similar to crop, but instead of taking a position and width and height values, it simply takes two points on the image and returns the selected region. This is very helpful for creating interactive scripts that require the user to select a region.
Remove a layer from the layer stack based on the layer’s index.
This function rotates an image around a specific point by the given angle. By default, in "fixed" mode, the returned Image is the same dimensions as the original Image and the contents will be scaled to fit. In "full" mode the contents retain their original size and the Image object will scale. By default, the point is the center of the image. You can also specify a scaling parameter.
Does a fast 90 degree rotation to the right. Note that subsequent calls to this function WILL NOT keep rotating it to the right! This function just does a matrix transpose, so following one transpose by another will just yield the original image.
Save the image to the specified filename. If no filename is provided, then it will use the filename the Image was loaded from or the last place it was saved to.
Save will implicitly render the image’s layers before saving, but the layers are not applied to the Image itself.
Scale the image to a new width and height.
If no height is provided, the width is considered a scaling value, i.e.:
>>> img.scale(200, 100) # scales the image to 200px x 100px
>>> img.scale(2.0) # enlarges the image to 2x its current size
Returns: IMAGE
Given a set of new corner points in clockwise order, return a sheared Image that transforms the Image contents. The returned image is the same dimensions.
cornerpoints is a 2x4 array of point tuples
This function automatically pops up a window and shows the current image
Gets width and height
Returns: TUPLE
Smooth the image, by default with a Gaussian blur. If desired, additional algorithms and apertures can be specified. Optional parameters are passed directly to OpenCV's cv.Smooth() function.
If grayscale is true the smoothing operation is only performed on a single channel otherwise the operation is performed on each channel of the image.
Returns: IMAGE
Given a number of cols and rows, splits the image into a cols x rows 2d array of cropped images.
>>> quadrants = Image("foo.jpg").split(2,2) # returns a 2d array of 4 images
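The tiling can be sketched on a plain 2-D pixel array (illustrative only; the real method returns cropped Image objects, and here the outer index is the row):

```python
def split_grid(img, cols, rows):
    """Split a 2-D pixel array into a rows x cols grid of tiles
    (a sketch of what Image.split(cols, rows) does)."""
    h, w = len(img), len(img[0])
    th, tw = h // rows, w // cols  # tile height and width
    return [[[row[c * tw:(c + 1) * tw]
              for row in img[r * th:(r + 1) * th]]
             for c in range(cols)]
            for r in range(rows)]
```

Splitting a 2x2 image into quadrants gives four 1x1 tiles, one per pixel.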
Split the channels of an image into RGB (not the default BGR). The single parameter is whether to return the channels as grey images (default) or to return them as tinted color images.
Returns: TUPLE - of 3 image objects
The stretch filter works on a greyscale image; if the image is color, it returns a greyscale image. The filter takes a lower and an upper threshold. Anything below the lower threshold is pushed to black (0) and anything above the upper threshold is pushed to white (255).
Returns: IMAGE
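The per-pixel rule described above can be sketched as follows (illustrative only; per the description, values between the two thresholds are left unchanged):

```python
def stretch_pixel(value, low, high):
    """Below `low` -> 0, above `high` -> 255,
    values in between are left as-is."""
    if value < low:
        return 0
    if value > high:
        return 255
    return value
```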
Converts image colorspace to BGR
RETURNS: Image
Converts image to Grayscale colorspace
RETURNS: Image
Converts image to HLS colorspace
RETURNS: Image
Converts image to HSV colorspace
RETURNS: Image
Converts Image colorspace to RGB
RETURNS: Image
Converts image to XYZ colorspace
RETURNS: Image
This helper function for shear performs an affine transform using the supplied matrix. The matrix can be either an OpenCV mat or an np.ndarray type. The matrix should be 2x3.
This helper function for warp performs a perspective transform using the supplied matrix. The matrix can be either an OpenCV mat or an np.ndarray type. The matrix should be 3x3.
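For reference, applying a 2x3 affine matrix to a single (x, y) point looks like this (a sketch of the underlying math applied per coordinate, not the library call, which transforms the whole image):

```python
def apply_affine(matrix, point):
    """Apply a 2x3 affine matrix [[a, b, tx], [c, d, ty]]
    to an (x, y) point."""
    x, y = point
    (a, b, tx), (c, d, ty) = matrix
    return (a * x + b * y + tx, c * x + c * 0 + d * y + ty)
```

An identity matrix with a translation column simply shifts the point.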
Given a new set of corner points in clockwise order, return an Image with the image's contents warped to the new coordinates. The returned image will be the same size as the original image.
Bases: SimpleHTTPServer.SimpleHTTPRequestHandler
The JpegStreamHandler handles requests to the threaded HTTP server. Once initialized, any request to this port will receive a multipart/replace jpeg.
The JpegStreamer class allows the user to stream a jpeg encoded file to an HTTP port. Any updates to the jpeg file will automatically be pushed to the browser via the multipart/replace content type.
To initialize:
>>> js = JpegStreamer()
To update:
>>> img.save(js)
To open a browser and display:
>>> import webbrowser
>>> webbrowser.open(js.url)
Note the optional parameters on the constructor:
- port (default 8080) - the TCP port you need to connect to
- sleep time (default 0.1) - how often to update; above 1 second seems to cause dropped connections in Google Chrome
Once initialized, the buffer and sleeptime can be modified and will function properly; port will not.
The VideoStream lets you save video files in a number of different formats.
You can initialize it by specifying the file you want to output:
vs = VideoStream("hello.avi")
You can also specify a framerate, and if you want to “fill” in missed frames. So if you want to record a realtime video you may want to do this:
vs = VideoStream("myvideo.avi", 25, True) #note these are default values
Whereas if you want to do a stop-motion animation, you would want to turn fill off:
vs_animation = VideoStream("cartoon.avi", 15, False)
If you select a fill, the VideoStream will do its best to stay close to “real time” by duplicating frames or dropping frames when the clock doesn’t sync up with the file writes.
You can save a frame to the video by using the Image.save() function:
my_camera.getImage().save(vs)
Search for an item in a list.
Returns: Boolean
Determines if it is a number or not
Returns: Type
Determines if it is a tuple or not
Returns: Boolean
This function is a utility for converting numpy arrays to the cv.cvMat format.
Returns: cvMatrix
Reverses a tuple
Returns: Tuple