Industrial_Vision module#

This code package provides a Qt user interface class, along with two worker classes:

Worker1 for the live video feed and image processing, and Worker2 for shape and color recognition, powered by the IAcouleurs class, which is itself powered by the IAFormes and Couleurim classes.

1. Features
  1. Qt User Interface Class: The Qt user interface class provides an interactive and intuitive graphical interface for the Industrial Vision system. It enables users to interact with the system, configure settings, visualize the real-time video feed, send information to the Nucleo card, and receive the recognition results.

  2. Worker1 Class: The Worker1 class is responsible for capturing live video feed and performing image processing operations. This class plays a vital role in preprocessing the images to ensure accurate and reliable shape and color recognition.

  3. Worker2 Class: The Worker2 class combines the power of AI models with shape and color recognition. It utilizes the IAcouleurs class to perform analysis on preprocessed images.

  4. IAcouleurs Class: The IAcouleurs class is a combination of the IAFormes and Couleurim classes. It utilizes a trained machine learning model to recognize various shapes and is also designed to identify colors in the images, employing a least-squares comparison to recognize and classify different colors with high accuracy.

class Industrial_Vision.Couleurim(Foldername='images/o1.bmp', livemode=False, Frame=None)[source]#

Bases: object

Couleurim class represents an image color analysis tool.

Parameters:
  • Foldername (str) – The path to the image file. Default is ‘images/o1.bmp’.

  • livemode (bool) – If True, uses a live frame instead of reading an image file. Default is False.

  • Frame (PIL.Image.Image) – The live frame to analyze. Only used if livemode is True.

r#

Red channel values of the image.

Type:

ndarray

g#

Green channel values of the image.

Type:

ndarray

b#

Blue channel values of the image.

Type:

ndarray

argr#

Indices of the maximum value in the red channel.

Type:

tuple

argg#

Indices of the maximum value in the green channel.

Type:

tuple

argb#

Indices of the maximum value in the blue channel.

Type:

tuple

nc#

Maximum color values for each channel.

Type:

list

__init__(Foldername='images/o1.bmp', livemode=False, Frame=None)[source]#

Initializes the Couleurim object.

Parameters:
  • Foldername (str) – The path to the image file. Default is ‘images/o1.bmp’.

  • livemode (bool) – If True, uses a live frame instead of reading an image file. Default is False.

  • Frame (PIL.Image.Image) – The live frame to analyze. Only used if livemode is True.
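Example

A minimal usage sketch based on the documented API, analyzing the default bitmap from disk:

  from Industrial_Vision import Couleurim

  # Analyze an image file (livemode disabled, the default)
  col = Couleurim(Foldername='images/o1.bmp')

  print(col.nc)                 # maximum color values for each channel
  print(col.couleur_proche())   # name of the closest matching color

  col.affichage()               # show the channel subplots and the matched color name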

affichage()[source]#

Displays the image with highlighted color channels.

Note

The red channel is shown in the top-left subplot. The green channel is shown in the bottom-left subplot. The blue channel is shown in the top-right subplot. The closest color name to the analyzed color is shown in the bottom-right subplot.

couleur_proche(requested_colour=None)[source]#

Finds the closest color name to the analyzed color.

Parameters:

requested_colour (list) – RGB values of the color to find the closest name for. Default is None, which uses the analyzed color stored in the nc attribute.

Returns:

The name of the closest color.

Return type:

str
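Example

The method matches the analyzed RGB value to the closest named color by minimizing the squared distance, as mentioned in the feature list. The sketch below illustrates that idea with a small hand-picked palette; the actual palette used by couleur_proche is not documented here, so the color dictionary is purely illustrative:

  import numpy as np

  # Illustrative palette; the real method may use a different, larger set of named colors.
  PALETTE = {
      'red':    (255, 0, 0),
      'green':  (0, 128, 0),
      'blue':   (0, 0, 255),
      'yellow': (255, 255, 0),
      'white':  (255, 255, 255),
      'black':  (0, 0, 0),
  }

  def closest_colour_name(rgb):
      """Return the palette name with the smallest squared distance to rgb."""
      rgb = np.asarray(rgb, dtype=float)
      names = list(PALETTE)
      dists = [np.sum((np.asarray(PALETTE[n], dtype=float) - rgb) ** 2) for n in names]
      return names[int(np.argmin(dists))]

  print(closest_colour_name([250, 10, 5]))   # -> 'red'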

class Industrial_Vision.IAFormes(Test=False)[source]#

Bases: object

IAFormes class implements a shape recognition system using a multi-layer perceptron (MLP) classifier.

Parameters:

Test (bool) – If True, initializes the classifier with a test dataset. Default is False.

__test#

Flag indicating whether the test mode is enabled.

Type:

bool

__mlp2#

Multi-layer perceptron classifier object.

Type:

MLPClassifier

x_test#

Test input data.

Type:

ndarray

y_test#

Test target labels.

Type:

ndarray

xt#

Training input data.

Type:

ndarray

yt#

Training target labels.

Type:

ndarray

Proba(xtest)[source]#

Predicts the probabilities of each class label for the given input data.

Parameters:

xtest (ndarray) – Input data to predict probabilities for.

Returns:

Array of predicted probabilities for each class label.

Return type:

ndarray

Proba2(xtest)[source]#

Predicts the class labels for the given input data.

Parameters:

xtest (ndarray) – Input data to predict labels for.

Returns:

Array of predicted labels.

Return type:

ndarray
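Example

A usage sketch based on the documented signatures; with Test=True the classifier exposes the held-out data through x_test and y_test:

  from Industrial_Vision import IAFormes

  ia = IAFormes(Test=True)

  probs = ia.Proba(ia.x_test)     # class probabilities, one row per test sample
  labels = ia.Proba2(ia.x_test)   # predicted class labels

  print(probs.shape)
  print(labels[:10])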

__init__(Test=False)[source]#

Initializes the IAFormes object.

Parameters:

Test (bool) – If True, initializes the classifier with a test dataset. Default is False.

matconf()[source]#

Displays the confusion matrix based on the predicted labels and true labels of the test dataset.

Note

This method should only be called when the test mode is enabled.
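Example

As a rough sketch of the underlying computation (not necessarily the exact implementation), the confusion matrix compares the labels predicted on x_test against y_test:

  import matplotlib.pyplot as plt
  from sklearn.metrics import ConfusionMatrixDisplay, confusion_matrix

  from Industrial_Vision import IAFormes

  ia = IAFormes(Test=True)          # test mode required, see the note above
  y_pred = ia.Proba2(ia.x_test)     # predicted labels on the test set

  cm = confusion_matrix(ia.y_test, y_pred)
  ConfusionMatrixDisplay(cm).plot()
  plt.show()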

montreimage(k, set)[source]#

Displays the image at index k from the given set.

Parameters:
  • k (int) – Index of the image to display.

  • set (ndarray) – Set of images.

class Industrial_Vision.IAcouleurs(livemode=True)[source]#

Bases: object

IAcouleurs class represents an image color and shape classification tool.

Parameters:

livemode (bool) – If True, enables live mode for analyzing frames. Default is True.

IA#

Instance of the IAFormes class for shape classification.

Type:

IAFormes

livemode#

Flag indicating whether the live mode is enabled or not.

Type:

bool

__init__(livemode=True)[source]#

Initializes the IAcouleurs object.

Parameters:

livemode (bool) – If True, enables live mode for analyzing frames. Default is True.

afficheinfo()[source]#

Returns the classification results and the closest color name.

Returns:

A tuple containing the classification results dictionary and the closest color name.

Return type:

tuple

aquisi(im, s)[source]#

Preprocesses an image for shape classification.

This method applies thresholding and resizing to the input image.

Parameters:
  • im (ndarray) – The grayscale image to preprocess.

  • s (int) – Threshold value for image binarization.

Returns:

Preprocessed image data as a 1D array.

Return type:

ndarray
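Example

A rough sketch of the preprocessing idea (threshold, resize, flatten). The 28x28 target size is an assumption made only to give the example concrete dimensions; the real size depends on the classifier's training data:

  import cv2
  import numpy as np

  def acquire(im_gray, s):
      """Threshold a grayscale image, resize it, and flatten it into a 1D feature vector."""
      # Zero out pixels below the threshold, keep the others as-is
      binarised = np.where(im_gray < s, 0, im_gray).astype(np.uint8)
      # Resize to the classifier's expected input size (28x28 assumed here)
      small = cv2.resize(binarised, (28, 28))
      return small.flatten()

  # vector = acquire(grayscale_frame, s=50)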

classi2(s=50, Fichier='images/e1.bmp', Frame=None)[source]#

Performs shape classification on an image or frame.

Parameters:
  • s (int) – Threshold value for image binarization. Default is 50.

  • Fichier (str) – Path to the image file. Default is ‘images/e1.bmp’.

  • Frame (ndarray) – The live frame to classify. Only used if livemode is True.

Returns:

The index of the predicted shape class.

Return type:

int

classification(s=50, Fichier='images/e1.bmp', Frame=None)[source]#

Performs color and shape classification on an image or frame.

Parameters:
  • s (int) – Threshold value for image binarization. Default is 50.

  • Fichier (str) – Path to the image file. Default is ‘images/e1.bmp’.

  • Frame (ndarray) – The live frame to classify. Only used if livemode is True.

Returns:

A list containing the classification results, with the following elements:
  • The shape classification result as a tuple (shape_name, probability).

  • The closest color name to the analyzed color.

  • The RGB values of the analyzed color.

Return type:

list
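Example

A usage sketch based on the documented signature, classifying a file from disk (livemode disabled); the unpacking assumes the three-element list described above:

  from Industrial_Vision import IAcouleurs

  ia = IAcouleurs(livemode=False)

  shape, colour_name, rgb = ia.classification(s=50, Fichier='images/e1.bmp')
  print(shape)         # e.g. (shape_name, probability)
  print(colour_name)   # closest color name
  print(rgb)           # RGB values of the analyzed color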

resize(image, l, m)[source]#

Resizes an image to the specified dimensions.

Parameters:
  • image (ndarray) – The image to resize.

  • l (int) – The width of the resized image.

  • m (int) – The height of the resized image.

Returns:

The resized image.

Return type:

ndarray

seuil(imNB, s=120)[source]#

Applies a threshold to an image.

Pixels with intensity values below the threshold are set to 0, and pixels with intensity values above or equal to the threshold are kept as is.

Parameters:
  • imNB (ndarray) – The grayscale image to apply the threshold to.

  • s (int) – Threshold value. Default is 120.

Returns:

The thresholded image.

Return type:

ndarray
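Example

The thresholding rule described above maps directly onto a vectorized NumPy expression; a minimal sketch:

  import numpy as np

  def threshold(im_nb, s=120):
      """Keep pixels >= s, set the rest to 0 (the same rule as seuil)."""
      return np.where(im_nb >= s, im_nb, 0)

  img = np.array([[10, 130], [200, 119]], dtype=np.uint8)
  print(threshold(img))   # [[  0 130] [200   0]]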

class Industrial_Vision.Window(parent=None)[source]#

Bases: QWidget

Window class represents the main interface for displaying a video feed and processing images.

Signals:

  • ImageUpdate (QImage): Signal emitted when a new frame is available to update the displayed image.

  • PrediUpdate (list): Signal emitted when a new prediction result is available.

Worker1#

Worker1 instance for capturing frames from the camera and applying filters.

Type:

Worker1

Worker2#

Worker2 instance for image processing and prediction.

Type:

Worker2

CancelFeed()[source]#

Cancels the video feed and stops the workers.

ImageUpdateSlot(Image)[source]#

Slot for updating the displayed image.

Parameters:

Image (QImage) – The new image to be displayed.

PrediUpdateSlot(pred)[source]#

Slot for updating the prediction result.

Parameters:

pred (list) – The prediction result.

__init__(parent=None)[source]#

Initializes the Window object and sets up the user interface.

Parameters:

parent (QWidget) – The parent widget. Default is None.
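Example

A hedged sketch of how the window and its slots are typically launched and wired, assuming PyQt5 is the Qt binding in use; the exact connections made inside __init__ may differ:

  import sys

  from PyQt5.QtWidgets import QApplication

  from Industrial_Vision import Window

  # Typical wiring performed inside the window (assumed, for illustration):
  #   self.Worker1.ImageUpdate.connect(self.ImageUpdateSlot)
  #   self.Worker2.PrediUpdate.connect(self.PrediUpdateSlot)

  app = QApplication(sys.argv)
  win = Window()
  win.show()
  sys.exit(app.exec_())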

changeim(c)[source]#

Slot for handling changes in the image processing mode.

Parameters:

c – The new value of the image processing mode.

changet()[source]#

Slot for handling changes in the time interval.

changettr1()[source]#

Slot for handling changes in the threshold value.

fin()[source]#

Slot for stopping the workers.

marcheCl()[source]#

Slot for starting the workers and configuring the processing parameters.

traitim(val)[source]#

Slot for handling changes in the image processing mode.

Parameters:

val – The new value of the image processing mode.

class Industrial_Vision.Worker1[source]#

Bases: QThread

A worker thread class for image processing.

Signals:
  • ImageUpdate: A PyQt signal emitted when a processed image is ready to be displayed.

  • frameUpdate: A PyQt signal emitted when a frame is ready to be processed.

timact#

The time interval between processed frames in seconds.

Type:

int

ThreadActive#

Indicates whether the worker thread is active or not.

Type:

bool

traitement#

The type of image processing to be performed.

Type:

int

th1#

The first threshold value for image processing.

Type:

int

th2#

The second threshold value for image processing.

Type:

int

run()[source]#

Starts the worker thread and performs image processing.

This method runs in a loop, continuously capturing frames from the camera, processing them according to the specified image processing type, and emitting signals to update the UI with the processed frames.

Returns:

None
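Example

A simplified sketch of the capture-process-emit loop this description implies, assuming PyQt5 and OpenCV; the actual Worker1 applies the configured filters and thresholds before emitting:

  import cv2
  from PyQt5.QtCore import QThread, pyqtSignal
  from PyQt5.QtGui import QImage

  class CaptureThread(QThread):            # illustrative stand-in for Worker1
      ImageUpdate = pyqtSignal(QImage)

      def run(self):
          self.ThreadActive = True
          cap = cv2.VideoCapture(0)        # default camera
          while self.ThreadActive:
              ok, frame = cap.read()
              if not ok:
                  continue
              rgb = cv2.cvtColor(frame, cv2.COLOR_BGR2RGB)
              h, w, ch = rgb.shape
              qimg = QImage(rgb.data, w, h, ch * w, QImage.Format_RGB888)
              self.ImageUpdate.emit(qimg)  # hand the processed frame to the UI
          cap.release()

      def stop(self):
          self.ThreadActive = False
          self.quit()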

stop()[source]#

Stop the worker thread.

This method sets the ‘ThreadActive’ flag to False and quits the thread.

timacsses(ti)[source]#

Set the value of the ‘timact’ attribute.

Parameters:

ti (float) – The value to set for ‘timact’.

trait(t)[source]#

Set the value of the ‘traitement’ attribute.

Parameters:

t (str) – The value to set for ‘traitement’.

traitth1(t)[source]#

Set the value of the ‘th1’ attribute.

Parameters:

t (int) – The value to set for ‘th1’.

traitth2(t)[source]#

Set the value of the ‘th2’ attribute.

Parameters:

t (int) – The value to set for ‘th2’.

class Industrial_Vision.Worker2[source]#

Bases: QThread

A worker class for performing tasks in a separate thread.

This class inherits from QThread and is designed to perform certain operations in the background. It utilizes the IAcouleurs class for image color representation and shape classification using a neural network. The goal of this class is to process images, perform classification, and send the classification information to a Nucleo card to activate motors.

__init__()[source]#

Initialize the Worker2 class.

This method sets up the initial state and configurations for the Worker2 class. It allows the user to choose a port for communication with the card that is connected to the computer.

The Worker2 class initializes various attributes such as image arrays, a color and shape classification tool represented by the ‘IAcouleurs’ class, threshold values, and lists for storing predictions.

After a delay of 10 seconds (time for the other thread to start; increase this delay if your computer is slow), it retrieves the available serial ports and prompts the user to select a COM port. Once the port is selected, a serial connection is established with the specified port at a baud rate of 115200.
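Example

A sketch of the port-selection and connection step described above, assuming pyserial; the prompt and the fixed delay are illustrative:

  import time

  import serial
  from serial.tools import list_ports

  time.sleep(10)                           # give the capture thread time to start

  ports = list(list_ports.comports())      # enumerate the available serial ports
  for i, p in enumerate(ports):
      print(i, p.device)

  choice = int(input("Select a COM port index: "))
  ser = serial.Serial(ports[choice].device, baudrate=115200)   # link to the Nucleo card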

detection()[source]#

Process the detection results and send commands to activate motors based on the classification.

This method checks the detection results stored in the tab1, tab2, and tab3 lists. If any of these lists contains non-zero values, it determines the corresponding action: a classification result in tab1 sends a command to activate motor1, a result in tab2 activates motor2, and a result in tab3 activates motor3.
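Example

A hedged sketch of the decision logic; the tally semantics, command bytes, and serial protocol are assumptions for illustration only:

  def send_motor_command(ser, tab1, tab2, tab3):
      """Pick the motor matching the first list with a non-zero tally and send its command."""
      # Hypothetical command bytes; the real protocol with the Nucleo card may differ.
      if any(tab1):
          ser.write(b'1')    # activate motor1
      elif any(tab2):
          ser.write(b'2')    # activate motor2
      elif any(tab3):
          ser.write(b'3')    # activate motor3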

pr(Imagecv)[source]#

Perform image processing and prediction.

This method receives an image, performs image processing and prediction using the ‘classification’ and ‘classi2’ methods of the ‘IAcouleurs’ class. It emits the prediction through the ‘PrediUpdate’ signal.

The ‘IAcouleurs’ class represents an image color and shape classification tool using a neural network.

The goal of this method is to send the classification information to a Nucleo card to activate motors. It determines the most likely prediction by comparing the predictions of several frames captured while an object passes in front of the camera.

Parameters:

Imagecv (ndarray) – The input image for prediction.

stop()[source]#

Stop the Worker2 class execution.

This method closes the serial connection with the card and stops the execution of the Worker2 class.