IRON On-Premises System
Version 2.0 – EN-US
Address and Contact Information:
Address:
Flexible Vision Inc.
8220 Arjons Dr.
San Diego, CA 92126, United States.
Internet address:
www.flexiblevision.com
E-mail address:
contact@FlexibleVision.com
Phone:
(619) 393-9595
MANUAL INFO:
These are the original instructions. The English language is binding.
No part of this documentation may be reproduced in any form without the written permission of Flexible Vision.
History of Changes:

Version  Date      Chapters  Description
V1.0     Mar 2024  –         First edition.
V2.0     Aug 2024  7         Revised and updated from the first edition.
Table of contents
1 LIMITED LIABILITY / WARRANTY 6
2 INTRODUCTION 7
2.1 Key Functions of AI Vision Systems 7
2.2 Iron On-Premises System Features and Benefits 7
2.2.1 Leveraging Advanced AI Features and Technologies 8
2.2.2 Seamless Integration and Extensibility 8
2.3 How the System Works 9
Collecting Images 9
Tagging the Image Set 9
Image Archive 9
Creating a Deep Learning Model 9
Running your Model 10
Storing Your Images 10
Retraining Over Time 10
Analyzing Data 10
2.4 About This Manual 10
2.5 List of Abbreviations 11
2.6 Applications by Industry 11
2.7 Intended Use 12
2.7.1 Intended Users 12
2.7.2 Unintended Use 12
2.8 Hardware/Software Architecture 12
2.9 Technical Data about the Edge Processor 14
2.10 Technology Designed for Today’s Industry 18
3 SAFETY 19
3.1 Generally Applicable Safety Rules 19
Not Allowed Use 20
3.2 Safety Decals and Markings 20
4 GETTING STARTED 20
4.1 Preparation 20
4.2 Assembly 20
Mounting options: 21
Wall Mounting 21
Book Mounting 21
DIN Rail Mounting 22
Initial Setup 22
4.3 Additional Hardware Setup 24
Hardware Setup Overview 24
Cameras 24
Compatible Cameras: 24
FAST Block 25
Stacklights 26
Lenses 26
Lighting 26
5 OPERATION 27
5.1 Initial Setup 27
5.1.1 Creating an Account 27
5.1.2 Linking your Device to the Internet 27
Method 1: 27
Method 2: 28
5.1.3 Linking your Device to your Account 28
5.2 Flexible Vision Onprem Software Explanation 29
5.2.1 Dashboard 30
5.2.2 Camera Details 31
5.2.3 Live Video 32
5.2.4 Camera Settings 33
Common Camera Settings 33
Crop Region of Interest 34
Advanced Camera Properties 35
5.2.5 Camera Calibration 35
Step 1: Optical Distortion Calibration 35
Step 2: Real-World Coordinate Calibration 37
5.2.6 Resolution 40
5.2.7 Time Machine 41
5.2.8 Snap & Find 42
5.2.9 Masking 44
5.2.10 I/O Presets 46
5.2.11 Settings 50
5.2.12 Node Creator 51
5.2.13 My Account 52
5.2.14 API Docs 53
Capture and Detection API 53
Endpoints: 53
Vision API 53
Endpoints: 53
How to Access the API Docs 53
How to Use the APIs 54
Authorization 54
How to Log In 54
6 HOW TO GET STARTED QUICKLY 55
6.1 Resource Videos 55
6.2 Other Helpful Videos 58
6.3 Remote Control with TeamViewer 60
6.4 Resource Files 60
Instructions to Download Calibration Grid Files: 60
7 PROCESSOR SPECIFICATIONS 61
Manual Update
This manual is the second edition (V2.0), revised and updated from the initial version.
LIMITED LIABILITY / WARRANTY
Always adhere to the recommendations and warnings in this manual. They reflect the requirements of Flexible Vision and of several essential component manufacturers. Ignoring these recommendations voids the warranty.
Repair work carried out under warranty is free of charge. Our responsibility is limited to repair or, if we deem it necessary, replacement free of charge.
Any warranty claim will be rejected if you exceed the maintenance intervals for the essential components.
Flexible Vision accepts no liability for adverse consequences directly or indirectly related to misuse of the supplied instrument.
Warranty will only be approved if:
The product is installed, operated, and maintained according to our instructions.
The system has not been opened or tampered with.
Our warranty does not cover the following events:
Damage due to incorrect handling, failure to observe the instruction manual, or attempts by any non-qualified party to repair the instrument.
Installation of any third-party software not approved by us.
NOTICE:
Do not open the processor housing or the power supply. Breaking the seals invalidates the warranty.
INTRODUCTION
The Flexible Vision AI Iron On-Premises System is designed to perform image capturing and inference on factory floors. This high-quality AI vision system integrates seamlessly into production environments to support various functions.
Key Functions of AI Vision Systems
Quality Control: AI vision systems inspect products in real time to detect defects and irregularities. This helps maintain high-quality standards and reduces waste by identifying issues early in the production process.
Process Optimization: By analyzing visual data, AI vision identifies areas for improvement, such as optimizing workflow, reducing bottlenecks, and minimizing downtime.
Predictive Maintenance: AI analyzes visual data from equipment to predict potential failures before they occur, reducing unintended downtime and improving overall equipment effectiveness (OEE).
Safety Monitoring: The system monitors the production environment for safety hazards, detects unsafe behaviors, identifies potential risks, and issues real-time alerts to ensure compliance with safety regulations.
Inventory Management: AI vision tracks inventory levels and stock movements, helping to maintain optimal inventory levels, reduce stockouts, and prevent overstock situations.
Robotic Automation: AI vision enables precise and accurate robotic tasks, including pick-and-place operations, assembly, welding, and packaging, leading to increased productivity and cost savings.
Data Analytics: AI vision analyzes visual data to provide insights for decision-making, identifying trends, analyzing performance metrics, and optimizing resource allocation.
Iron On-Premises System Features and Benefits:
Integration and Deployment: The Iron On-Premises System integrates easily with existing factory setups, such as PLC or MES systems, offering a straightforward installation process.
Connectivity Options: The system supports both wired and wireless connectivity, allowing flexible deployment across different network infrastructures.
Visual Inspection and Data Storage: The system captures and stores visual inspection data in the cloud, allowing teams to collaborate and share information across factory floors.
Event Logging and Sharing: Logged events can be pushed to the cloud and shared with other users, facilitating collaboration and addressing quality issues promptly.
Lot Code and Label Verification: Ensures authenticity by verifying lot codes and labels, guarding against counterfeit products, and securing trust in each product’s origin and quality.
Time Machine: Captures video replays of events, such as detected defects or discrepancies in part counts. Users can define the time window for video capture before and after an event and save these videos to the cloud for further analysis.
Robust Design and Durability: The system is built to perform reliably in challenging environmental conditions, including extreme temperatures and rigorous industrial settings, ensuring consistent operation.
Scalability and Expansion: The hardware can adapt to your requirements, such as adding more cameras or integrating with additional devices, ensuring your investment remains future-proof and grows with your business.
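The Time Machine's pre/post-event window can be pictured as a rolling buffer of recent frames plus a short post-event capture. The sketch below is an illustrative model only, not the product's implementation; the class name and integer "frames" are hypothetical stand-ins.

```python
from collections import deque

class TimeMachineBuffer:
    """Keep the last `pre_frames` frames; on an event, collect `post_frames`
    more so the saved clip spans the moments before and after the event."""

    def __init__(self, pre_frames, post_frames):
        self.pre = deque(maxlen=pre_frames)   # rolling window of recent frames
        self.post_frames = post_frames

    def add_frame(self, frame):
        self.pre.append(frame)                # oldest frame drops out automatically

    def capture_event(self, frame_source):
        # Freeze the pre-event window, then pull `post_frames` more frames.
        clip = list(self.pre)
        for _ in range(self.post_frames):
            clip.append(next(frame_source))
        return clip

# Usage: integers stand in for images. Frames 0..9 stream in; only the
# last three are retained, then two post-event frames are appended.
buf = TimeMachineBuffer(pre_frames=3, post_frames=2)
for f in range(10):
    buf.add_frame(f)
clip = buf.capture_event(iter([10, 11]))
print(clip)   # [7, 8, 9, 10, 11]
```

The fixed-length deque is what makes the "window before the event" cheap: memory stays bounded no matter how long the camera runs.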
How the System Works
Flexible Vision’s machine vision systems utilize reinforcement learning for continuous improvement via trial and error.
Flexible Vision also provides a suite of post-process tools to enhance the analysis and usability of inspection data. These tools include the analytics dashboard on the Reporting page of the cloud-based UI, the image archive, the node creator programming environment, and the ability to update and retrain models.
Traditionally, systems fail due to inflexibility, scalability issues, limited data utilization, and inadequate error handling. Flexible Vision overcomes these challenges with dynamic, learning-based approaches that evolve with the needs of your business.
Collecting Images
The software can be used to easily collect a series of 5-10 images, documenting both excellent and subpar products. The sample size can be optionally increased with augmentation, allowing your team to target issues and successes in one place.
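Augmentation multiplies a small image set by applying label-preserving transforms such as flips. The sketch below is illustrative only (tiny nested lists stand in for images; the actual software's augmentation options may differ):

```python
def flip_horizontal(img):
    """Mirror each row of a 2D pixel grid left-to-right."""
    return [row[::-1] for row in img]

def flip_vertical(img):
    """Mirror the rows of a 2D pixel grid top-to-bottom."""
    return img[::-1]

def augment(images):
    """Return the originals plus horizontally and vertically flipped
    copies, tripling the effective sample size."""
    out = []
    for img in images:
        out.extend([img, flip_horizontal(img), flip_vertical(img)])
    return out

sample = [[[1, 2],
           [3, 4]]]          # one tiny 2x2 "image"
augmented = augment(sample)
print(len(augmented))        # 3 images from 1
```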
Tagging the Image Set
Using the cloud-based UI, you can tag images with areas of interest, making it simple to organize and locate your factory images. This tagging system enhances collaboration and makes managing different image types more efficient.
Image Archive
All images and detections are saved in the Image Archive. You can search for specific tags, OCR, UPC, or detection IDs using custom filters. You can find detailed information on each tag detection in the raw JSON formatted data stored with each image. This can be used for further analytics and process improvement.
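Because each image carries raw JSON detection data, results can be filtered programmatically as well as through the archive's UI. The sketch below uses a hypothetical record layout for illustration; the actual schema stored with your images may differ, so consult the raw data in your own archive.

```python
import json

# Hypothetical detection record -- field names are illustrative only.
raw = '''{
  "image_id": "img-0042",
  "detections": [
    {"tag": "scratch", "confidence": 0.91},
    {"tag": "label_ok", "confidence": 0.99},
    {"tag": "scratch", "confidence": 0.55}
  ]
}'''

record = json.loads(raw)

def filter_by_tag(record, tag, min_confidence=0.8):
    """Return detections matching a tag above a confidence threshold."""
    return [d for d in record["detections"]
            if d["tag"] == tag and d["confidence"] >= min_confidence]

hits = filter_by_tag(record, "scratch")
print(len(hits))   # 1 -- only the 0.91 scratch passes the threshold
```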
Creating a Deep Learning Model
To gain comprehensive insights into your project, you can create a deep learning model in your cloud portal. Your model will be created and ready for production within minutes and can be shared with team members and end users.
Running your Model
After creation, your AI model will automatically deploy and prepare for validation. You can download and sync your model to as many on-premise production lines as necessary. Our cloud-based software makes it simple to add production lines later on if needed.
Storing Your Images
Image storage is an advantageous feature that gives you a trackable and traceable timeline for your product. In case of error or recall, you can reference your cloud folder. All images will automatically sync to your cloud storage account. If you need to locate a file, you can search for images by date, time, or serial number.
Retraining Over Time
As your system collects more images, you may find new inspection criteria. The software allows for constant growth and progress, automatically including new images and tags in your next model. This adaptability ensures that the system evolves with your business needs.
Analyzing Data
The Reporting page in the cloud portal displays various statistics for detected tags, accuracy, and speed. Filters for projects and date ranges can be customized to display up-to-date information. Powerful search and filter tools enable users to analyze data and gain valuable insights into their processes.
About This Manual
This installation manual belongs to the owner of the Flexible Vision AI Iron On-Premises System.
Read this Operator’s Manual before operating the system.
Pay special attention to the safety precautions in this manual, as they prevent accidents causing personal injury or device damage.
This manual describes characteristics unique to your AI vision system.
In this manual, safety is a priority. By reading it, you will become aware of the dangers and hazards of operating this system.
Using This Manual
This manual was created to help familiarize you with safety, assembly, operation, adjustments, maintenance, and troubleshooting.
The information contained within this manual was current at the time of printing.
Some parts may change slightly to ensure you have the best performance. To check for updates, refer to our website at https://www.flexiblevision.com or call (619) 393-9595 to speak to one of our representatives.
List of Abbreviations
FV: Flexible Vision.
ONPREM: On-Premises.
AI: Artificial Intelligence.
DVI: Digital Visual Interface.
ATX: Advanced Technology Extended.
HDD: Hard Disk Drive.
LAN: Local Area Network.
POE: Power Over Ethernet.
HMI: Human Machine Interface.
Applications by Industry
Food Packaging
Flexible Vision offers comprehensive solutions for the food packaging industry, including the detection of label damage, placement errors, and defects. It can also identify liquid coloration variance, monitor liquid levels, detect debris, and verify seals (both thermal and twist). Additionally, the system provides label matching, inventory management, and quality control for parcels. These capabilities help ensure the integrity and safety of packaged food products.
Logistics
In the logistics sector, accurate labeling and defect detection are critical for efficient operations. Flexible Vision’s advanced imaging technology ensures precise labeling, identifies defects and damage, verifies package weight, and aids in inventory management. This helps streamline logistics processes, reducing errors and improving overall efficiency.
Automotive
Flexible Vision supports various applications in automotive manufacturing by providing expert detection and analytics systems. It can be deployed across multiple assembly processes, assisting in quality control and enhancing production efficiency. The system’s capabilities include monitoring assembly line components and detecting defects or anomalies.
General Industrial
For general industrial applications, Flexible Vision offers seamless integration into assembly environments. The system enables rapid deployment, accurate predictions, and easy integration with diverse camera systems. Key features include the ability to quickly add training parameters, detect scratches, and maintain robust quality control tracking for both robotic and operator-related tasks in dynamic assembly settings.
Medical Devices
In the pharmaceutical and medical device industries, quality control is paramount. Flexible Vision assists in ensuring the quality and compliance of packaging, barcodes, fluid colors, and other critical elements. The system provides rapid detection and precise issue tracing, helping to resolve manufacturing issues efficiently and maintain high standards of safety and accuracy.
Semiconductor Environments
Flexible Vision is well-suited for semiconductor manufacturing environments, including cleanroom settings. It integrates easily with workstation setups and offers versatile detection capabilities for tasks such as X-ray inspection and soldering quality control. The system supports dynamic assembly processes and helps maintain strict quality standards.
Intended Use
The Flexible Vision Iron On-Premises System is intended as an all-in-one inspection system for applications in robot guidance and quality assurance.
Intended Users
Intended users are well-trained, technically skilled operators, including process engineers and automation integrators.
Intended Environment
Operation at ambient temperatures between -25 °C and 70 °C.
Storage at temperatures between -30 °C and 85 °C.
Clean and dry work areas.
Hardware/Software Architecture
Hardware/software architecture diagram.
Technical Data about the Edge Processor
DC IN: Power input for GPU and POE card.
Digital I/O Terminal Block: Supports 8 digital inputs and 8 digital outputs.
COM1 and COM2 Ports: Support RS232/422/485 serial devices.
DVI-I Port: Used to connect a DVI monitor or an optional split cable for dual display mode.
DisplayPort: Used to connect a DisplayPort monitor.
USB 3.2 Gen 2 Port (10 Gbps): Used to connect USB 3.2 devices.
LAN Port: Used to connect the system to a local area network (non-PoE).
ATX Power On/Off Switch: Press to power the system on or off.
HDD Port: Removable 2.5″ SATA HDD bay.
Antenna Hole: Opening for an antenna connector for the WiFi module.
DC_IN1: 3-Pin DC Power Input Connector

Pin  Definition
1    +9 to +48 VDC
2    –
3    GND
DC_IN2: 4-Pin DC Power Input Connector
DIO1: Digital Input / Output Connector
Connector type: Terminal Block 2X9 18-pin, 3.5 mm pitch
GPIO Connector Diagram

Pin  Definition   Pin  Definition
18   GND          17   DC INPUT
16   DO8          15   DI8
14   DO7          13   DI7
12   DO6          11   DI6
10   DO5          9    DI5
8    DO4          7    DI4
6    DO3          5    DI3
4    DO2          3    DI2
2    DO1          1    DI1
* Note: Only NPN wiring is supported
Example Input Wiring
Example Output Wiring
SAFETY
Safety Warning Messages in this Manual
Read all safety warnings and all instructions. Failure to follow the warnings and instructions may result in electric shock, fire, or severe injury.
Save all warnings and instructions for future reference.
DANGER!
This symbol indicates DANGER, a hazardous situation that, if not avoided, will result in death or severe injury.
This RED symbol indicates all DANGER notifications.
WARNING!
This symbol indicates a hazardous situation that could result in death or severe injury if not avoided.
This ORANGE symbol indicates all WARNING notifications.
CAUTION!
This symbol, used with the safety alert symbol, indicates a hazardous situation that, if not avoided, could result in minor or moderate injury.
This YELLOW symbol indicates all CAUTION notifications.
WARNING!
Read and understand all safety measures and warnings before operating or adjusting.
Untrained persons who have not thoroughly read and understood this manual must never approach the system.
Refer to all safety decals in this manual. Read all instructions noted on them.
Generally Applicable Safety Rules
This list contains general safety measures that must be complied with for personal safety reasons:
Only persons who have read and understood the operating instructions may be in the system’s vicinity.
Unauthorized persons may not perform any actions while the system is in use.
Do not remove the warning stickers or markings affixed to the components.
Always clean the equipment properly and keep the workplace free of dirt and obstacles.
Always ensure sufficient ambient lighting. Operate only in daylight.
Not Allowed Use
Using the system for activities other than AI vision inspection is not permitted.
Modification of the equipment in any way is not allowed.
Safety Decals and Markings
Your equipment comes equipped with all safety decals. The decals are designed to help you safely operate your system. Read and follow their directions.
Keep all safety decals clean and legible.
Safety Decals:
WARNING Decal on all Cameras.
HOT SURFACE
Owner Assistance
If customer service or spare parts are required, please contact Flexible Vision Inc.
Our company has trained technicians, spare parts, and equipment to service your device.
We strive to make technology tangible and accessible for clients of all backgrounds.
The parts on your system are specially designed; always replace these parts with genuine parts. Please reference your order number or serial number when calling.
GETTING STARTED
Preparation
Ensure you understand all safety rules and warnings in this manual.
Ensure you understand all safety decals on the equipment.
Adhere to PPE safety regulations.
Assembly
Hardware Key Components
Camera(s).
Lens(es).
Lighting.
Edge Processor with mounting kit.
Turn Key System (Optional).
Mounting options:
Wall Mounting
The wall mount kit is available for the VCO-6000 series and is included in the standard package.
For wall mounting, lock the wall mount kit to the bottom side of the system using six screws.
Wall mounting kit, including two brackets and six fasteners
Book Mounting
For book mounting, use the same wall mount kit in the standard package.
Lock the wall mount kit onto the rear side of the system using six screws.
DIN Rail Mounting
The DIN rail mount kit is available for the VCO-6000 series as an optional accessory.
To install, lock the DIN rail mount onto the rear panel of the system using two screws.
Initial Setup
Plug in power converter cables as shown below.
Plug the corresponding cable into the DC_IN1 port:
Plug the corresponding cable into the DC_IN2 port:
Reference the following diagram to finish setting up the processor, display, and camera.
Connect the camera either to the dedicated PoE port as shown, or to a USB port, depending on the camera type.
Turn the system on by pressing the gray power button located on the side of the system.
Log in to the system using the default password ‘fvonprem’.
Additional Hardware Setup
Hardware Setup Overview
Cameras
Compatible Cameras:
GigE: Omron, Teledyne
USB3: Blackfly FLIR, Omron, and most USB web cameras
Camera Hirose Cable

Pin No.  Signal Name                  IN / OUT  Voltage
1        POWER IN                     IN        +10.8 to +26.4 VDC
2        Opto-isolated In (Line0)     IN        Low: below +1.0 V; High: +3.0 to +26.4 V *
3        Open Collector GPIO (Line2)  IN / OUT  +3.0 to +26.4 V / Open Collector
4        Opto-isolated Out (Line1)    OUT       Open Collector
5        Opto-isolated Common         IN        –
6        GND                          IN        0 V

* Potential difference between TRG_in and Opto-isolated Common.
FAST Block
The FAST Block is built for high-speed hardware triggering and strobing.
To trigger a camera using a sensor with the FAST Block, connect a camera and PNP sensor as shown in the diagram below.
You can also connect a light as shown to trigger a strobe when capturing the image.
PNP Sensor: Note the use of a PNP sensor in the diagram below. In a PNP sensor, the output signal is connected to the positive voltage source. This means that when an object is detected by the sensor, the signal goes high. This is different from an NPN sensor, in which the signal goes low and is connected to ground when an object is detected.
Stacklights
An indicator stacklight, also known as a signal tower or indicator light, is used in industrial applications to provide visual and sometimes audible signals about the status of a machine or process. These stacklights are typically mounted on equipment and consist of multiple colored segments, each representing a different condition such as operational status, warnings, errors, or maintenance needs. For example, a green light might indicate normal operation, while a red light could signal a machine fault or emergency. Stacklights help operators and maintenance personnel quickly assess the state of equipment, improving safety and efficiency on the production floor. You can program your stacklight to change colors within the Node Creator tab of the software.
Lenses
Lenses in industrial machine vision are critical components used to capture images of objects for inspection, measurement, or identification processes. These lenses focus light onto the image sensor, ensuring that the captured image is clear and accurate. The choice of lens affects the field of view, resolution, depth of field, and magnification, which are crucial for precisely analyzing objects at various distances and sizes. In industrial applications, lenses are selected based on specific requirements such as the type of object being inspected, the required level of detail, and the working environment.
Lighting
Camera lighting in industrial applications is used to illuminate objects for machine vision systems, ensuring that images are captured with sufficient clarity, contrast, and detail for accurate analysis. Proper lighting is crucial for reducing shadows, enhancing features, and minimizing reflections or glare, which can affect the performance of image processing and inspection tasks.
Ring Lights: Positioned around the camera lens, ring lights provide uniform, shadow-free illumination directly onto the object, ideal for highlighting surface details and reducing glare.
Low-Angle Lights: Also known as dark field illumination, these lights are positioned at a low angle to the object surface, enhancing the visibility of surface textures, edges, and defects by creating shadows.
Backlighting: This type of lighting is placed behind the object, creating a silhouette or enhancing the visibility of transparent or translucent objects. It is often used for measuring dimensions or detecting the presence of objects.
Bar Lights: These linear lights are used to illuminate long or large objects, providing even lighting across the entire field of view. They are often used in conveyor systems or where wide area coverage is needed.
Diffuse Dome Lights: These lights provide soft, even illumination by diffusing light from all directions, which reduces reflections and highlights surface features on curved or shiny objects.
OPERATION
Initial Setup
You can link the Onprem device to your cloud account after successfully powering up the system and installing a camera.
Creating an Account
From a computer, visit: www.FlexibleVision.com
Click on the ‘LAUNCH APP’ button at the top of the page.
In the top right corner, click on ‘Sign up’.
On the following screen, enter an email address and password to sign up. Alternatively, you can sign up by continuing with a Google or Apple account.
Linking your Device to the Internet
Method 1:
This method is for systems with no monitor connected; for systems with a monitor, use Method 2.
Power on your Flexible Vision on-prem device.
Connect to the ‘vision_cell’ Wifi hotspot from your laptop using ‘Password’ as the password.
Open a browser session using Chrome and type 192.168.12.1, the default IP address of the Flexible Vision system, into the address bar.
Enable pop-ups in the Chrome menu bar.
From the prompt, select your factory SSID and enter the password.
Once connected, the assigned IP address appears at the bottom of the prompt.
From your laptop, connect to the same SSID Wifi network as the Flexible Vision On-prem device.
Method 2:
Power on your Flexible Vision on-prem device.
Follow the steps on the screen to connect to the internet.
Linking your Device to your Account
Either click on the assigned IP address of the device or enter it into the address bar.
In the second pop-up tab, you are prompted with a confirmation screen. Click ‘Confirm’.
You are prompted with a login screen. Enter your Flexible Vision credentials.
Your device is now linked to your account.
Flexible Vision Onprem Software Explanation
Camera Details, Adjusting, Calibration, Settings, and more
Watch the video lessons mentioned in Chapter 6 to understand and apply the software’s capabilities thoroughly.
Start the Flexible Vision Onprem software and log in.
On the left side, you will find the Task List.
Interface Tab Overview:
Home:
Dashboard
Cameras:
Camera Details
Live Video
Camera Settings
Calibration
Time Machine
Image Processing:
Snap & Find
Masking
Communication:
I/O Presets
System:
Settings
Node Creator
My Account
API Docs
Dashboard
The Dashboard contains widgets that display recent activity from your programs. You can customize the dashboard by clicking and dragging the widgets to the location you want. You can also further customize what you want the dashboard to display using the Node Creator.
Recent predictions: This widget shows images that have been processed recently. Click on an image to view more detailed information about detections and access the raw data.
Hide/Show Detections: Toggle to turn on or off detection boxes shown on captured images.
Show Only Failures: Turn this on to only see failures that are detected.
Process Time: This widget displays the process time of recent detections. It also provides the total number of objects found, as well as the average confidence level for recent detections.
Recent Activities: This widget shows a detailed log of recent activities at your workstations.
Analytics: This widget displays a plot of recent detections.
Model: Choose to display only detections from a particular model or select multiple models.
Workstation: Choose which workstation(s) you want to see detections from.
Camera Details
In the Camera Details tab, you can view information about your cameras and their statuses.
Click on the three dots below ‘Actions’ next to a given camera to view the camera video feed, change the camera settings, or calibrate the camera.
If you do not see your camera after you have configured it, press the ‘Refresh Cameras’ button. On initial setup, you may need to set the IP address of your camera in order to see it on this list. Reference the GigE settings page for further information.
Displayed in this tab:
Camera name
Height (px)
Width (px)
Calibrated Status
Offset Angle
Image Midpoint (px)
Pixels/mm
Actions
Live Video
In the Live Video tab, you can view the live video feed of your cameras. You can also snap images to upload directly to a project’s dataset or download to your device.
How to snap images to download or add to a project:
Select a camera from the drop-down list at the top of the screen.
Press the Camera icon to take a snapshot.
The images you capture will appear on the right in the Snap Selection menu.
If you would like to download images, select all of the images you would like to download by clicking the checkbox above each image, then click the Download button.
To select all images, hover over the checkbox and click ‘Select All’.
To add images to an existing project, select the images you would like to add, then select your project from the drop-down menu under ‘Select Project’ and press the Upload button.
To create a new project, select the images you would like to add to the new project, then click the Create & upload button. Give the project a name then click ‘Create & Upload’.
Camera Settings
In the Camera Settings, you can view and configure the settings of your cameras. Select Camera Settings from the Task List and choose the camera whose settings you want to see from the drop-down list. To save any changes made to the settings, press the Save Settings button.
Common Camera Settings
To change the basic camera settings, use the menu shown below. You can change the exposure time, gain, and frame rate using either the slider on the left or by entering a value on the right.
Camera Name: Enter the name to be displayed for your camera here.
Exposure Time: Sets the exposure time, in µs, when ExposureMode is Timed and ExposureAuto is Off. These settings can be found in the Advanced Camera Properties menu under Acquisition Control.
Gain: Controls the selected gain as an absolute physical value.
Frame Rate: Controls the acquisition rate (in Hz) at which frames are captured.
Camera Hardware Trigger: Turn on the camera hardware trigger when using a high-speed input directly to the camera. You can use the FAST Block with the camera for seamless triggering.
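Exposure time and frame rate are linked: a new frame cannot begin before the previous exposure ends, so the exposure time alone puts an upper bound on the achievable frame rate. The sketch below illustrates that bound under the simplifying assumption that sensor readout time is ignored; real cameras will cap out somewhat lower.

```python
def max_frame_rate(exposure_us):
    """Upper bound on frame rate (Hz) implied by exposure alone: the
    frame period must be at least the exposure time. Real cameras also
    add sensor readout time, so the true limit is lower than this."""
    return 1_000_000 / exposure_us   # exposure is in microseconds

print(max_frame_rate(10_000))   # 10 ms exposure -> at most 100.0 fps
```

This is why raising the Exposure Time slider can silently reduce the effective Frame Rate: the camera cannot honor both settings at once.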
Crop Region of Interest
To create a region of interest, change any of the settings below using the sliders on the left or by entering values on the right. Alternatively, you can click and drag a box directly on the camera video feed to select a region of interest. Press Set Region to apply changes and the camera video feed will update to display only the region of interest. Press Clear Region to reset any changes made.
Region Width and Region Height: These change the effective width and height of the sensor in pixels. You can adjust these values to change the size of the region viewed through the camera.
X Offset and Y Offset: These change the horizontal and vertical offset, in pixels, from the origin to the region of interest.
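The geometry of these four values is simple: the offsets position the region's top-left corner relative to the sensor origin. As an illustration (the helper name is hypothetical, not part of the software), here is how one could compute the offsets that center a region on the sensor:

```python
def centered_roi(sensor_w, sensor_h, region_w, region_h):
    """Compute the X/Y offsets (pixels from the sensor's top-left origin)
    that center a region of interest on the sensor."""
    if region_w > sensor_w or region_h > sensor_h:
        raise ValueError("region larger than sensor")
    x_off = (sensor_w - region_w) // 2
    y_off = (sensor_h - region_h) // 2
    return x_off, y_off

# Center a 640x480 ROI on a 1920x1200 sensor.
print(centered_roi(1920, 1200, 640, 480))   # (640, 360)
```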
Advanced Camera Properties
To view or change advanced camera settings, press the Change Advanced Properties button under Advanced Camera Properties. Inside any of the drop-down menus, select a setting to view or change it.
Displayed settings:
Device Control
Image Format Control
Acquisition Control
Analog Control
LUT Control
Digital IO Control
Logic Block Control
Software Signal Control
Counter and Timer Control
Event Control
User Set Control
Chunk Data Control
Action Control
File Access Control
Test Control
Transport Layer Control
Technology Agnostic Do Not Save
Camera Calibration
You can calibrate your cameras from the Calibration page. There are two steps to complete for calibration:
Step 1 removes any lens distortion and calculates the number of pixels per millimeter.
Step 2 sets the angle and midpoint when sending coordinates to a robot for pick-and-place guidance.
Note: you will need to download and print out the Flexible Vision calibration targets. This can be done by clicking the links on the Calibration page or using the instructions in Section 6.3. When printing, ensure scaling is disabled in your PDF print settings.
Step 1: Optical Distortion Calibration
Go to the Calibration tab in the Task List on the left-hand side and select the camera you would like to calibrate from the drop-down list.
Place the calibration target in the field of view of the camera. Make sure that the plane where the grid is placed is parallel to the camera sensor’s plane.
Enter the number of rows and columns and the checker width into the corresponding fields. Make sure to use the correct checker width based on the target you are using.
Using the camera icon button, capture five images: one with the grid in the center of the camera view and one with the grid in each of the four corners. After you capture five images, the camera will automatically calibrate.
The order in which you capture the images does not affect calibration.
Make sure that the entire grid is visible in the camera view when capturing images.
To verify the calibration was successful, navigate to the Camera Details tab, where you should see a pixel/mm conversion now listed for your selected camera.
Step 2: Real-World Coordinate Calibration
Place the QR code rotation target within the camera field of view.
Depending on where you place the target and its angle relative to the camera’s horizontal and vertical field of view, the software will place the midpoint at the cross and apply a rotation angle equal to the rotational displacement of the target.
Press the camera icon button when you are ready to set the angle and midpoint. To verify the calibration was successful, navigate to the Camera Details tab, where you should see an offset angle and midpoint now listed for your selected camera.
Calibration Point Overview:
The green dots represent where the system found calibration data.
For robot calibration, these are the locations you will touch off for a frame calibration.
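Once both calibration steps are complete, the pixels-per-millimeter value, midpoint, and offset angle can be combined to convert a pixel coordinate into real-world millimeters for pick-and-place guidance. The sketch below is illustrative only; the sign and rotation conventions are assumptions and should be verified against your own calibration results:

```python
import math

def pixel_to_world(px, py, midpoint, angle_deg, px_per_mm):
    """Convert a pixel coordinate to mm, relative to the calibrated midpoint,
    rotated by the calibrated offset angle (conventions assumed, verify on-site)."""
    dx = (px - midpoint[0]) / px_per_mm
    dy = (py - midpoint[1]) / px_per_mm
    a = math.radians(angle_deg)
    wx = dx * math.cos(a) + dy * math.sin(a)
    wy = -dx * math.sin(a) + dy * math.cos(a)
    return wx, wy

# 10 px/mm, midpoint at (320, 240), no angular offset:
wx, wy = pixel_to_world(330, 240, (320, 240), 0.0, 10.0)  # 1 mm to the right
```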
Resolution
Adjusting the resolution of your camera is helpful when your application calls for higher speed or higher accuracy.
How to Change the Resolution:
Go to the Camera Settings tab in the Task List and select a camera from the drop-down list at the top of the screen.
Select a pre-specified resolution option from the second drop-down list on the right.
Time Machine
The Time Machine is a tool that allows you to observe what occurred before or after a failure was detected. It is useful in determining the cause of a failure and preventing other failures in the future.
In the Time Machine tab, you can view a timeline of Time Machine events that have occurred.
Click on a green event bubble on the Events Timeline to view a video of the event and see when it occurred.
In this tab, you can also see the live video feed from any cameras that are currently streaming.
Underneath the camera you would like to use to record a Time Machine event, you can choose the length of video the camera records when an event is triggered.
Setting Up the Time Machine
First, enable the Time Machine by going to the Settings tab, clicking on the Enable Features section, then pressing the ‘Enable Time Machine’ button under Configure Time Machine.
Here, you can set the maximum number of days of video archiving and the quality of the stream that is recorded.
Go to I/O Presets and create a preset for the model you would like to use. In the dropdown menu for I/O Trigger, select ‘Push’ under Camera I/O.
Choose a camera you would like to use to trigger a Time Machine Event, then go to Camera Settings, and toggle on ‘Camera Hardware Trigger’ for that camera.
When using the FAST Block, plug this camera into the FAST Block, along with a sensor that will be used to trigger the camera.
When this camera detects a failure, it will trigger a Time Machine Event.
Snap & Find
Snap & Find is a great resource for validating a vision process. This tab presents a live stream of the selected camera and shows detections made using the selected model.
You can optionally capture the current image frame for traceability.
How to verify the results of a program:
Select a preset (optional) or select a camera, model, model type, and version from the drop-down menus.
The camera’s live video feed will appear, along with any detections made using your model.
Press the Camera icon to take a snapshot. This will allow you to see Pass/Fail results, OCR and barcode information, processing time, and the results of any other inspection tools your project uses.
This information can also be viewed on the Dashboard and the results are automatically archived to the cloud for future reference.
You can also get the raw data as a JSON object by pressing the ‘Raw Data’ button, which can be used in the Node Creator for your custom application.
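As an illustration of consuming the raw data downstream (for example, in a custom Node Creator application), the snippet below filters detections by confidence score. The ‘detections’ and ‘score’ field names are assumptions about the JSON layout, not documented values:

```python
import json

def filter_detections(raw_json, min_score):
    """Keep only detections whose score meets the minimum confidence.
    Field names ('detections', 'score') are assumed, not documented."""
    data = json.loads(raw_json)
    return [d for d in data.get("detections", []) if d.get("score", 0) >= min_score]

sample = '{"detections": [{"tag": "screw", "score": 0.91}, {"tag": "screw", "score": 0.42}]}'
print(filter_detections(sample, 0.5))  # [{'tag': 'screw', 'score': 0.91}]
```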
Example:
Upload and Process
You can also upload image files in most formats. After each upload, the results are automatically logged for traceability on your local USB drive. If you have a cloud subscription, these results will also appear in your cloud account at the next sync interval.
How to Upload and Process an Image File:
Select a model name, type, and version.
Click the toggle next to ‘Show Uploader’ so the framed upload area appears.
Either drag and drop a file onto the framed area or click on the framed area and search on your computer for the image file to be processed.
Processed images will appear below the uploader. Click on a processed image to view detailed information about detections and to get the raw data.
Masking
The Camera Masking tool blocks parts of a camera’s field of view during prediction.
This tool helps focus the prediction to occur in a specific location of the image.
Toolbar Overview:
Drawing tool palette: box, ellipse, polygon, and line.
Undo.
Clear all masks from the current image.
Save.
Show a list of saved masks.
How to Create a Mask:
Select a camera from the first drop-down menu.
Select a masking drawing tool.
Click once on the image to select your start point.
Click again on the image to select the end point of your drawing tool.
When using the polygon tool, finish your drawing by moving the mouse back to the starting point; a green dot will appear there. Click the green dot once to close the polygon.
How to Save a Mask:
Select the Save icon.
Give your mask a unique name, then click ‘Save.’
I/O Presets
Presets are used to run complex vision operations with a simple hardware or software trigger. They are useful for Snap & Find and in the Node Creator.
How to create a preset:
Select the method of how your preset will be triggered from the first drop-down list. This can be a physical digital input or a TCP/IP command.
The ‘Run’ and ‘Push’ Camera I/O options can be used with a digital input connected to the FAST Block when the camera is in trigger mode.
Select a model from the second drop-down list.
If you do not see your model listed, ensure you have synced your models in the Settings tab.
Select the model type from the third drop-down list.
You can select between the high-speed and high-accuracy versions of your model; a model can be trained for high speed, high accuracy, or both during the training process.
Select a version of your model from the fourth drop-down list. Versions are listed from newest to oldest, so the latest version appears at the top of the list.
Optionally, select a program you would like to run from the fifth drop-down list.
If you do not see your program listed, ensure that you have synced your models in the Settings tab, or click the ‘Pull Programs’ button at the top of this page to pull the latest programs from your cloud portal.
Select the desired camera from the sixth drop-down list.
Optionally, select a mask from the next drop-down list to use with this specific model during processing.
Enter the minimum confidence score you would like to be displayed for detections. This determines the lowest tag score that renders on the image during post-processing.
You can use Inference Logging to choose whether you would like to log and store the results of the inspection.
Turn on Time Machine Event if you would like to record a Time Machine event when a failure is detected.
Press the ‘Save’ button on the right-hand side to save your preset. If you would like to start over, press the ‘Clear’ icon.
If desired, you can create up to 10 presets, all independent from each other.
Run a Preset with TCP/IP Protocol
Connect the host device and initiate the request to the IP address configured on the Settings page, using port 5300. Example: 192.168.1.101:5300.
Once connected, the host device can send the string {"cmd1":{}} to the Vision processor.
The Vision processor will respond with something similar to the following:
The host device can also send a unique ID along with the message.
It is handy for linking logged images to a vehicle VIN, a device serial number, or a specific image for traceability purposes. The term “did” stands for Detection Identification.
To use this feature, send the string {"cmd1": {"did": "MySN123"}}, where MySN123 is replaced with your unique Detection ID.
The returned result and the cloud sync both contain this ‘did’ value so that you can search for it in the future.
Help: To fetch the list of available commands, send the string ‘help’ over the TCP connection.
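The request/response exchange above can be scripted from a host PC. The sketch below assumes the command number matches the preset number and that the reply is a JSON result object; both are assumptions to verify against your TCP/IP settings:

```python
import json
import socket

def build_cmd(preset=1, did=None):
    """Build the trigger string, e.g. {"cmd1": {}} or {"cmd1": {"did": "MySN123"}}.
    The preset-number-to-command mapping is an assumption."""
    body = {} if did is None else {"did": did}
    return json.dumps({f"cmd{preset}": body})

def run_preset(host, port=5300, preset=1, did=None, timeout=10.0):
    """Send a preset trigger to the Vision processor and return its reply."""
    with socket.create_connection((host, port), timeout=timeout) as s:
        s.sendall(build_cmd(preset, did).encode())
        return s.recv(65536).decode()  # reply shape depends on your TCP/IP settings

# Example usage (requires a reachable processor):
# result = run_preset("192.168.1.101", did="MySN123")
```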
Running a Preset with Digital I/O:
To trigger the vision processor, you must wire in a sensor or pushbutton to the Input ports on the system.
The inputs and outputs are predefined, and you cannot remap them. You can use inputs 1-8 to trigger a program preset.
Outputs for pass, fail, and processing status are predefined for easy integration.
Once an input is triggered, the busy output will turn on while the system is in process.
Once processing is complete, the busy signal will turn off, and the ready signal will turn on.
If a program contains pass or fail criteria, the pass or fail output will turn on for one second after processing.
The following timing chart describes this process in more detail:
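In software terms, the busy/ready handshake can be modeled as a pulse followed by polling. The set_input and read_output callables below are hypothetical stand-ins for whatever I/O layer (PLC, fieldbus, or GPIO driver) your integration uses:

```python
import time

def trigger_and_wait(set_input, read_output, timeout_s=5.0, poll_s=0.01):
    """Pulse a preset input, wait for busy to drop and ready to assert, then
    return the pass result. The I/O callables here are hypothetical."""
    set_input(True)       # assert the trigger input
    time.sleep(0.05)      # hold the pulse briefly so the edge is seen
    set_input(False)
    deadline = time.monotonic() + timeout_s
    while time.monotonic() < deadline:
        status = read_output()  # e.g. {"busy": False, "ready": True, "pass": True}
        if not status["busy"] and status["ready"]:
            return status.get("pass", False)
        time.sleep(poll_s)
    raise TimeoutError("vision processor did not signal ready in time")
```

Note that the pass/fail output only pulses for one second after processing, so the result should be read promptly once ready asserts.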
Controls
Digital I/O Status and Control
The status of outputs and inputs can be found in the Settings tab, under GPIO Status.
The I/O status and controls are available for visualization and control: you can jog the outputs and read the inputs. These functions are typically used when testing communication with mating automation equipment.
The TCP/IP settings allow control over the data type sent in response to the TCP/IP trigger command.
Updates are saved automatically after each change.
You can send TCP/IP commands over port 5300.
Settings
In the Settings tab, you can view and update the system settings.
GPIO Status: View the status of inputs and outputs.
TCP/IP: Configure the data type sent in response to a TCP/IP trigger command.
Enable Features: Enable and configure the Time Machine and FTP server.
Gig-E Camera: View your camera IP information and set the IP address of your cameras.
Network:
Connect to a new wireless network.
View the WiFi-assigned IP address.
View and set the LAN IP addresses.
System:
Change your user role and combination, workstation name, and hardware name.
View your account information.
Update the system and see the current version number.
Restart or shut down the system.
Deauthorize the system.
Sync:
Sync your models: Press the ‘Sync Models’ button to pull and update all your cloud training on your device.
Upload programs
Set a custom time interval to sync analytics with the cloud.
Set the time interval to clear locally stored analytics.
Node Creator
The Node Creator is a programming tool for wiring together hardware devices, APIs, and online and offline services as part of IoT and logic control. It provides a browser-based flow editor that makes it easy to build programs using a visual programming language.
To create a simple flow program for talking to a PLC, a database, and an MES system, you would:
Drag and drop nodes from the palette onto the workspace. Nodes represent different functions or devices.
Connect the nodes together with wires to create a flow. The wires represent the data flow between the nodes.
Configure the properties of each node. This includes things like the IP address of the PLC, the database table name, and the MES system endpoint.
Deploy the flow to the Node Creator runtime. This will start the flow and allow it to interact with the PLC, database, and MES system.
Flexible Vision Nodes
Cloud Upload
OnPrem Upload
OnPrem Snap
Open Camera
Configure Camera
Get Image
Release Camera
Detect Motion
Update Inference Object
Get Pin
Set Pin
Manipulate Image
Display Image
My Account
Through My Account, you can access your Flexible Vision cloud account. In the cloud, you can view and create projects, train models, review system analytics of past detection processes, and manage your account.
You can access a step-by-step guide to creating models and utilizing the Flexible Vision cloud by clicking on the ‘Training Center’ tab on the left side of the screen, or by navigating to https://www.flexiblevision.com/resources/.
API Docs
The Flexible Vision APIs are a great resource for creating custom workflows according to your specific needs and applications. They can also be used to integrate advanced vision capabilities into existing systems.
Capture and Detection API
The Capture and Detection API provides operations for image capture and object detection.
Endpoints:
query
annotations
audit
auth
branching
camera
corpus
organizations
permissions
predict
programs
project
shop
tags
devices
jobs
mask
models
train
users
validate
Vision API
The Vision API handles industrial camera acquisition. It provides operations related to image capture for prediction, camera settings, and Time Machine events.
Endpoints:
vision
time_machine
How to Access the API Docs
To access the Capture and Detection API docs, click the API Docs tab in the Task List on the left-hand side of the screen. You can also access these API docs by typing localhost:5000/api/capture/ into your browser on the system.
The Vision API docs can be accessed by either of the following methods:
Type the IP address of the system into your browser, followed by :5555/api/vision/
If you are using the system, type localhost:5555/api/vision/ into your browser.
How to Use the APIs
One way to use the APIs is from the API docs page.
On the API docs page, click on an endpoint to expand it, then click ‘Try it out’ in the top right corner.
Fill out any required fields and click ‘Execute.’
The APIs can also be used with the HTTP nodes in the Node Creator.
http in: You can use this node to create an HTTP endpoint to send data to.
http request: You can use this node to create a request to the Flexible Vision API. Choose the method you want to use, then enter the endpoint URL.
You can use this node with a function node, for example, to specify what information you want to send back as a response.
http response: You can use this node to send a response such as processed data back to an endpoint created with the http in node.
Authorization
For authenticated endpoints, authorization is required.
How to Log In
On the API docs page, press the ‘Authorize’ button and enter ‘Bearer ’ followed by your access token.
To get your access token, use the following request: https://v1.cloud.flexiblevision.com/api/capture/auth/node_login
With the payload: {"username": "account_username", "password": "account_password"}
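Putting the login flow together in a script might look like the sketch below. The exact field name carrying the token in the login response is an assumption, so check the actual reply from node_login before relying on it:

```python
import json
import urllib.request

LOGIN_URL = "https://v1.cloud.flexiblevision.com/api/capture/auth/node_login"

def login_payload(username, password):
    """JSON body for the node_login request."""
    return json.dumps({"username": username, "password": password}).encode()

def bearer_header(token):
    """Authorization header for authenticated endpoints."""
    return {"Authorization": f"Bearer {token}", "Content-Type": "application/json"}

def fetch_token(username, password):
    """POST credentials to node_login and return the parsed JSON reply
    (the token field name in the reply is an assumption)."""
    req = urllib.request.Request(
        LOGIN_URL, data=login_payload(username, password),
        headers={"Content-Type": "application/json"}, method="POST")
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())
```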
HOW TO GET STARTED QUICKLY
Resource Videos
To get started quickly with your AI system, follow the resource videos found in the Flexible Vision online video course, available from the Resources page at https://www.flexiblevision.com/resources/.
The Resource Videos screen will appear.
Follow these video lessons carefully; they will help you get started quickly.
The course consists of eight video lessons:
CREATING A PROJECT
This lesson covers how to:
Create an account
Create a new project
Add images from a camera
Upload images
TAGGING YOUR DATASET
This lesson covers how to:
Create new tags
Tag your image set
Use shortcuts
TRAINING YOUR MODEL
This lesson covers how to:
Augment your training data set
Create your model
Test your model in the cloud
SETTING UP YOUR PROCESSOR
This lesson covers how to:
Connect the hardware and power on
Make the WiFi and server connections
Register your device
Sync the AI models from the cloud
RUNNING YOUR MODEL
This lesson covers how to:
Quickly run a model on your processor using the preset feature
Create a mask
Configure and run a preset
CAMERA SETUP AND CALIBRATION
This lesson covers how to:
Create an image setup
Remove lens distortion
Choose camera settings
Calibrate the camera to real-world coordinates
USING NODE CREATOR, CREATING FIRST FLOW
This lesson covers how to:
Understand the workspace
Create a flow to run a preset
Handle basics of JSON formatted objects
Add controls to the dashboard.
CREATING AND RUNNING YOUR FIRST PROGRAM
This lesson covers how to:
Create a program
Add post-process inspections
Sync and run
Other Helpful Videos
VIDEO 1: Log in to Enterprise Organizations
This lesson covers how to:
Log in and navigate your admin console
Update user assets
Enter and edit your company name and logo
Turn product share on or off
VIDEO 2: Syncing Your Vision Models
This lesson covers how to:
Start and name a new project
Add camera and images
Take snapshots from the training model
Capture and Process
VIDEO 3: Creating Your First Training
This lesson covers how to:
Name and add a new project
Take snapshots
Define good and bad snapshots
Capture and process.
VIDEO 4: Making a Robust Training
This lesson covers how to:
Set up a vision project about recognizing two types of screws.
VIDEO 5: Setting up your Hardware
This lesson covers how to:
Attach a camera and power on the Onprem Vision Cell
Connect to the Vision Cell WiFi
Enter the Onprem IP address
Connect the Onprem device to your WiFi
Select Vision Cell IP link, confirm, and log in
Name your workstation and download models
Test the Onprem, load a barcode & OCR model, and take a snapshot
VIDEO 6: OCR Training
This lesson covers how to:
Detect text using OCR on a PCB
Train a PCB OCR model in the cloud to check OCR text.
Remote Control with TeamViewer
How to log in via TeamViewer to control the local processor:
On the left side, choose ‘Remote Control’.
Enter the Partner ID of the system you want to control.
Click ‘Connect’.
When prompted, enter the password of the system.
Resource Files
Instructions to Download Calibration Grid Files:
Go to https://www.flexiblevision.com/resources/
Under the Resource Files section, use the multicolor button to choose and download the relevant PDF file.
Go to your Downloads folder to open the file.