How Do Vision Inspection Systems Work for Automated Quality Assurance?
Welcome to Flexible Vision.
We are a new type of machine vision company focused on changing the way machine vision is deployed on factory floors.
Finding subtle defects and managing archived images is now simple thanks to our intuitive-to-use software and hardware solution.
Flexible Vision is the first machine vision application to merge the power of the cloud and the reliability and security of the edge.
Do on-the-go inspection and image archiving in the app.
Or use the edge system to integrate with existing automation and add capability to your factory lines.
Flexible Vision opens the door to new and exciting defect detection on organic and dynamic products.
The Flexible Vision application is easy to use, and creating an inspection model is simple.
Quality engineers now have the power to provide solutions to the factory floor.
Contact us to see how Flexible Vision can solve your application.
Welcome to Lesson 1.
Creating a project.
Building an AI vision model is broken out into 4 steps.
In this lesson we will focus on creating the project and adding the images.
If you don’t already have an account, head back over to the Flexible Vision home page to sign up.
Click the login button in the upper right of the page.
Then Click signup and fill in your login and company information.
Click the launch button and you will be forwarded to your Flexible Vision app dashboard.
Click the plus Icon to add a new project.
Type the name you would like to associate with this project and click Add.
You will now see your new project card appear.
The grey circle at the bottom right of the project card indicates completion progress.
Let's click on it to move to our first step: adding the images.
There are a few ways to add images to your project.
You can use a USB webcam, upload from a file manager, or upload directly from a Flexible Vision on-prem processor.
In this lesson, we will focus on the first two methods.
Select the project and camera from the dropdown menus.
Once you have your product within the cameras view, click the snap button to capture the image.
Repeat this process a few times to get a good variety of images.
Usually, five to ten image samples are a good starting point.
Now let's add a few images from our file directory.
Simply click the upload button to bring up the drag and drop upload box.
Our images are now added to our project.
Remember, you can always come back and add more images to your training set.
To advance to tagging the images, click the orange Tag Images button.
In summary, we have now created an account and a project, and added images from our local webcam and via image upload.
This concludes lesson 1.
Please join me in lesson 2 to start tagging these images.
Welcome to Lesson 2.
Tagging your dataset.
In this lesson we will focus on step two, tagging our image dataset.
To create your tags for the items you are interested in detecting, click the Add Tag button in the lower right quadrant of the screen.
You can add as many tag names as you wish.
In this case, we will add just two.
To start tagging, simply click and drag the bounding box over the feature or item of interest.
It is important to tag all features or items visible within the image.
Missing or incorrectly tagged items can degrade the quality of your detections.
To toggle between tag types, press the corresponding number key or use the tag selector at the top of the image.
Once all your images are tagged, it's a good idea to double-check your work.
There are a few shortcuts built into the tagging app to make your work a bit easier.
These shortcuts include saving, adding a tag, duplicating a tag, zooming and more.
Right clicking anywhere on the image will bring up a list of tools along with a shortcut menu.
In summary, we have created multiple tag types, tagged our images, and reviewed a few simple shortcuts.
Please join me in Lesson 3 to kick off model training.
Welcome to Lesson 3.
In this lesson we will focus on steps 3 and 4: creating the model and testing it in the cloud.
To start the training process, click the orange button labeled Run Training.
This will bring up a list of available options you can cater to your specific application.
The first option is choosing high accuracy versus high speed.
High accuracy is good for most applications.
Optionally, higher speed is available for applications where objects and defects are larger and more obvious.
The lower settings on this menu are used for augmenting your dataset to make it more robust and less susceptible to lighting conditions and camera pose.
Toggle these settings as needed.
The resolution dropdown sets the resolution of the image used for training and running the model.
For example, setting the resolution to 768 pixels will resize your dataset and make your model run faster, but with less detail.
So if your defects are small, it's best to leave this resolution at the native value of your image sensor.
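As a rough, hypothetical illustration: a defect that spans about 20 pixels across a 2448-pixel-wide native image would shrink to roughly 6 pixels after resizing to 768, which may not leave enough detail for reliable detection.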
Click Create Your Model to save these settings and start the training.
Ensure you have tokens in your account.
Tokens can be purchased through our web store or through your organization administrator.
Head back over to your dashboard to see the progress of your model.
A typical training takes about 20 to 30 minutes to complete.
Once complete, the indicator will go away, your completion circle will be full, and we are ready to test the model's performance in the cloud.
To test your model, click the Snap and Find menu item on the left navigation.
From the dropdowns, select your project and camera.
Move your item within the camera's view and click snap.
Within a few seconds, you will see your model's results on the filmstrip at the bottom of the page.
Click on the image to enlarge it.
Review your image and ensure it meets your performance requirements.
Optionally, you can use the drag-and-drop function if a camera is not applicable to your application.
It's always a good idea to run this test on a variety of parts and poses.
This completes this lesson.
In summary, we have augmented our data set, created our AI model, and tested our models results in the cloud.
Please join me in the next lesson, where we will set up our on-premises processor.
Welcome to Lesson 4.
In this lesson we will focus on setting up your hardware and linking your processor to your account.
In this first section we will focus on connecting your camera, processor, and monitor.
Before we start, make sure to unbox the hardware and securely mount the equipment.
Start by connecting the touchscreen monitor's USB and DisplayPort cables to the processor unit.
The camera can be connected to the dedicated PoE port or USB, depending on your camera model.
The processor is also equipped with two non-PoE ports: LAN 1 for connecting to the factory LAN, and LAN 2 for machine-to-machine communication.
Finally, connect the three 24-volt power supplies as shown.
Once these connections are in place, connect the system to the power outlet and it will begin its power-up.
In this second section we will focus on getting your processor connected to your Wi-Fi network, registering your device, and lastly, syncing your models.
Start by connecting your laptop or smart device to the on-premises processor's hotspot.
Each processor has a unique name that starts with Visioncell.
The hotspot password is "password".
Once connected, open your web browser, type 192.168.12.1, and press Enter.
Click on any of the menu tabs on the left to bring up the Wi-Fi connection prompt.
Select the Wi-Fi network you would like this device to be on and enter the corresponding password.
Click the Update Network Settings button and wait for the assigned IP address to be displayed.
The processor is now connected and accessible on our factory Wi-Fi network.
We can now move our laptop back over to our factory Wi-Fi.
Now let's click on the newly assigned IP address to register our device.
Click login and if prompted, enable popups and refresh the page.
Click the confirm button and then sign on using the Flexible Vision account that you created in Lesson 1.
You will now be redirected back to your on-premises dashboard.
Again, click on any of the left menu items to bring up the device name prompt.
This is the name that will show up in the metadata of your predictions.
The status of this device will also be available in your cloud portal.
Let's now select the models we would like to pull down from the cloud and run on this device.
Click Sync and wait for the models to be downloaded and deployed.
In this lesson, we have connected our system components, registered our device, and synced our models from the cloud.
Join us in the next lesson where we will run these models.
Welcome to Lesson 5.
In this lesson we will focus on running your downloaded model, creating a mask, and running a model preset.
The preset feature built into the application allows you to run a complex program with a simple input from a remote device.
In this lesson we will cover running your model with and without a preset, as well as configuring some of the preset options.
Once your models have been fully downloaded, head over to the Snap and Find menu tab.
Go ahead and select the camera you would like to use, along with the model name and the version of your model.
Within a moment, you will see the live feed of your camera finding your objects or defects.
This is a great place to verify the model is performing as expected.
It's a good idea to move the part around or present various samples at this point.
Now let's create a new mask to block out areas of the image.
Click on the Masking menu item on the left.
Again, let's select the camera we would like to use to draw our mask.
There are several tools available for drawing the mask.
Use the ones that fit your application best.
When using the polygon tool, you will notice a green dot, which is your start and finish point.
To save your mask, click the save icon and give it a unique name.
Now let's move on to creating a preset.
Click the I/O Presets item on the left menu.
Start by selecting the input trigger for this preset, then continue by selecting the model, model version, camera, and mask.
You can also set the minimum confidence score you would like to display on screen, as well as toggle whether you would like to archive the inspections to the cloud.
Once fully configured, click Save.
Now let's head back over to the Snap and Find page to run our preset with a single click.
These presets can also be triggered by a PLC, a robot, or a digital input directly to the processor.
This concludes the Lesson 5 training.
In this lesson, we tested our model in the Snap and Find feature, created a mask, and ran a preset.
Join us in the next lesson where we will run through the camera setup settings.
Welcome to Lesson 6.
In this lesson we will focus on setting up your camera image and calibrating the camera to real-world coordinates.
In this first section we will focus on finding your camera and setting up the image.
Let's head over to the Camera Details tab on the left.
Here we will see a list of connected cameras.
If you don't see your newly connected camera listed, click the Refresh Cameras button.
Within a few seconds, you will see your newly connected camera appear on the list.
Now we can review the camera settings tab.
From the camera dropdown list, let's select the camera we would like to adjust.
On the right side of the page you will see a few common settings,
including changing the camera name, the camera exposure time, and the sensor gain.
Adjusting the sensor exposure time will make your image brighter or darker.
The gain setting is useful for amplifying the image, but it can also cause more image noise.
Minimal gain is recommended for most applications.
To increase processing speed and remove unwanted areas from an image, we can use the crop region of interest tool.
Simply left-click and drag a box over the area of interest.
Click the blue Set Region button, and within a moment the camera feed will render only the selected area.
To undo this, click the Clear Region button.
Within a few moments, the camera feed will revert to the original full view.
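Conceptually, the speed-up comes from the fact that a cropped region of interest is just a sub-array of the full frame, so downstream processing sees fewer pixels. Here is a minimal NumPy sketch of that idea; the frame size and box coordinates are made-up values, and this is an illustration of the concept rather than the tool's actual implementation.

```python
import numpy as np

# Simulate a full-resolution grayscale frame (height x width); values are placeholders.
frame = np.zeros((2048, 2448), dtype=np.uint8)

# Hypothetical region of interest chosen by dragging a box: x, y, width, height.
x, y, w, h = 600, 400, 800, 600

# Cropping is plain array slicing; downstream processing now sees
# 800 x 600 pixels instead of 2448 x 2048, so it runs faster.
roi = frame[y:y + h, x:x + w]
print(roi.shape)  # (600, 800)
```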
Under the advanced tab, you can access a plethora of camera settings, including setting auto exposure, tuning color channels, and much more.
In this next section we will cover removing lens distortion and calibrating your camera to real world coordinates.
Click on the Calibration menu tab.
Select your camera from the dropdown and place your checkerboard calibration grid under the camera’s field of view.
Focus your camera so you get sharp corners between the squares.
Enter the width of a square; in this case, our squares are 20 millimeters.
To calibrate the camera pixels to millimeters and remove lens distortion, we will take a series of five images.
Order does not matter, but we will want to move the grid to all four corners and once to the middle.
Remember to keep the entire grid within the view of the camera.
After taking the fifth image you will notice the image appears much more flat and true.
The calibration has also advanced to the second step of setting our X, Y, and rotation reference.
This second step is mostly used when sending coordinates to a robot for pick-and-place guidance.
Let's now move the large QR code under the camera.
Right away you will see the camera track the QR code.
After clicking the snap button, make sure to keep the QR code in the same position while teaching the robot the origin X and Y positions.
The very center crosshair is the origin position, and the X and Y directions follow the typical right-hand rule used in robot frames.
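For readers curious what this style of checkerboard calibration looks like under the hood, here is a minimal sketch using OpenCV. It illustrates the general technique described in this lesson (several views of a grid with a known square size, followed by undistortion); it is not Flexible Vision's actual implementation, and the file names and grid dimensions are assumptions.

```python
import glob
import cv2
import numpy as np

# Assumed checkerboard geometry: inner corners per row/column and 20 mm squares.
pattern_size = (9, 6)
square_mm = 20.0

# Ideal 3D corner positions on the flat grid (Z = 0), scaled to millimeters.
objp = np.zeros((pattern_size[0] * pattern_size[1], 3), np.float32)
objp[:, :2] = np.mgrid[0:pattern_size[0], 0:pattern_size[1]].T.reshape(-1, 2) * square_mm

obj_points, img_points, image_size = [], [], None
for path in glob.glob("grid_*.png"):  # the five snapped grid images (hypothetical file names)
    gray = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
    found, corners = cv2.findChessboardCorners(gray, pattern_size)
    if found:
        obj_points.append(objp)
        img_points.append(corners)
        image_size = gray.shape[::-1]

# Solve for the camera matrix and lens distortion coefficients.
ret, camera_matrix, dist_coeffs, rvecs, tvecs = cv2.calibrateCamera(
    obj_points, img_points, image_size, None, None)

# Undistorting a new image is what makes it appear "flat and true".
undistorted = cv2.undistort(cv2.imread("part.png"), camera_matrix, dist_coeffs)
```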
In this lesson, we have walked through viewing the available image setup tools and calibrated our camera to be used for robot guidance.
Join us in the next lesson where we will explore creating a flow program.
Welcome to Lesson 7.
Creating your first Flow.
In this lesson we will focus on understanding the workspace,
creating a flow to run a preset, the basics of JSON-formatted objects, and adding controls to the dashboard.
To start creating our first flow, let's head over to the Node Creator tab in the left menu of our on-premises processor.
The editor window consists of four components:
The header at the top, containing the deploy button and main menu.
The palette on the left consists of the available nodes to use.
The main workspace in the middle, where flows are created, and the sidebar on the right.
Start by clicking and dragging the blue inject node from our palette onto our workspace.
Let's also do this with the green debug node.
Now click and drag a line between the two nodes.
This simple flow will send a timestamp number through the wire out to a debug node.
Let's open the console in the right menu.
You will notice the deploy button has now turned blue.
In order for the flow to run in real time, we will need to click Deploy.
Once deployed, we can click the blue inject node and see the results appear in the debug window.
Let's now modify this flow to run a preset on our processor.
If you scroll down the palette, you will see a set of Flexible Vision nodes.
Click and drag the preset node onto the wire.
Double-click the newly added node and let's configure it.
This configuration only needs to be done once.
Future uses of this node will reuse these configuration settings.
The workstation name is a unique name that is added to the image metadata.
This name is useful when filtering data in the cloud to know which station the results came from.
The username is admin, the password is fvonprem, and the IP address is 172.17.0.1.
Click Add, then Deploy.
We can now open the node to select the preset we would like to run.
In this case we will be using preset number 2.
Let's click Deploy one more time and test out the flow.
You can see the results appear in the console window.
Let's review these results.
The data is displayed in JSON format, a text-based format for representing structured data.
This format allows data to be nested in a tree structure and lets users pull as much or as little of the data as they are looking for.
The results consist of a variety of information, including the camera settings used, image size, model name and version, processing times, quantity of items found, pass/fail details, locations of each item, the image in base64 format, and much more.
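As a rough illustration of what such a nested result can look like, and how little code it takes to pull out a single field, here is a hedged Python sketch. The field names and values below are invented for demonstration and are not the application's documented schema.

```python
import json

# Hypothetical, heavily trimmed prediction result; the real payload nests far more detail.
raw = """
{
  "model": {"name": "connector_inspection", "version": "1714069823"},
  "image": {"width": 2448, "height": 2048},
  "timing_ms": {"inference": 112},
  "predictions": [
    {"name": "connector", "score": 0.97, "bbox": [412, 310, 655, 540]}
  ]
}
"""

result = json.loads(raw)

# Pull as much or as little of the tree as you need, e.g. just the detected item's name.
print(result["predictions"][0]["name"])  # connector
```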
In this demonstration we will want to pull out the name of the item found.
To do this, we will click on the small arrow next to the item of interest.
This will copy the path of the variable.
To output just the variable of interest, I will paste the path into a new debug node.
click deploy and run the flow.
Now we can see two results.
The first is the original message, and the second is the new message with just the word connector.
Clicking the flag on the side of the debug node will silence its messages.
Now I will only see the debug message of interest.
Now that we have a few basics under our belt, let's start customizing our dashboard.
To Start, I want to move our customizable area up to the very top of my dashboard so it is easily viewable to the operator.
To do this, just drag and drop this item onto the predefined boxes on the page.
Now let's head back over to Node Creator to add some tools to this area.
Scroll down to the bottom of the palette and you will see a large list of dashboard nodes.
Let's add a button and a text box.
I'm also going to add a new node type called a change node.
This node will allow me to pull out just the variable of interest, instead of the entire object.
I will overwrite the payload with just the name of the item found.
After stringing these nodes together, I will need to customize the button and text box information.
Double-click the node and give your button a unique name.
In this menu, you can change the size, text color, box color, location on the page, and more.
I will keep the default settings for this demo; let's do the same for the text box as well.
With this modified flow, an operator can click the button on the dashboard and see the result for the item the camera found.
Let's clean up some unneeded nodes and test it out.
We can now see our dashboard showing our new button and text box.
After clicking our new button, we can see the camera took an image and is displaying its result within the newly added text box.
This concludes our Node Creator training module.
Thanks for following along.
Please join me in the next lesson where we will explore and create post inspection programs.
Welcome to Lesson 8.
Creating your first Program.
Programs are an easy way to run various inspections alongside the item detection.
Using this feature is beneficial when you need to send positional information to a robot, read a barcode, count the quantity of an item, or determine the surface area of a defect.
In this lesson we will focus on understanding the program structure, adding post-process inspections to a newly created program, and syncing and running our program on a processor.
Let's log in to our cloud portal and head over to the Programs tab in the left menu.
Here we will see a list of all our programs.
To create a new program, click on the plus icon.
Let's give it a unique name so it's easy to reference.
Next, we will need to select the project and model version we want to add this program to.
Click the Add Inspection Tool button.
Our environment allows you to go two levels deep.
For example, if you were trying to read the date on a coin, you would first find the coin, then find the date on the coin.
For our demo we will look within the entire field of view of the camera and find a connector.
Let's now add some inspection tools.
The quantity tool will count the number of connector detections above the specified score.
The orientation tool allows you to upload a reference image of your item; during runtime, the system will output the X, Y, and rotation of the item.
This is typically used for robot guidance.
Let's upload a calibrated image from our processor and crop out a single instance of the item we are expecting to find.
The isolate tool is extremely good at removing noisy backgrounds and will highlight just the item of interest.
This tool will also run during runtime if enabled here.
We now need to specify our origin point.
This is a relative point that will be sent to the robot during runtime.
The area tool will run automatically and does not need any special configuration.
We can upload an image just to test that the tool is detecting our item.
The pass/fail tool will highlight your images as red or green during runtime, and you can also enable logging of only failed images to the cloud.
In this demo we will pass the result if the connector quantity is equal to 1.
If the system detects exactly one, it will be a pass; anything else will be a fail.
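The rule being configured here is simple enough to express in a few lines of code. The sketch below shows the general idea of counting detections above a confidence threshold and passing only when exactly one is found; the field names and threshold are illustrative assumptions, not the product's internal code.

```python
# Hypothetical detections returned for one image.
detections = [
    {"name": "connector", "score": 0.96},
    {"name": "connector", "score": 0.41},  # below the threshold, so the quantity tool ignores it
]

min_score = 0.60  # assumed minimum confidence score configured for the program

# Quantity tool: count connector detections at or above the configured score.
quantity = sum(1 for d in detections if d["name"] == "connector" and d["score"] >= min_score)

# Pass/fail tool as configured in this demo: exactly one connector is a pass.
result = "PASS" if quantity == 1 else "FAIL"
print(quantity, result)  # 1 PASS
```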
Let's click Save and sync our models and programs to our processor.
On our processor, let's go to our Presets tab.
Go through and quickly add a new preset for this application.
Since this is a new program, we will need to sync it to this device; click on the Pull Programs button.
You can also sync your models through the Settings tab and skip this step.
Select the name of the program we created.
There are additional image archiving preferences available here as well.
Let's click Save, and now let's try it out.
From the Snap and Find window, select the new preset we configured.
The system is detecting the items as expected; click the snap button and verify that the program is outputting the expected results.
Zooming in, we can see the red orientation tool is detecting the X, Y, and rotation of the item, along with outputting a fail result.
The fail result is due to the quantity of connectors not being exactly one, as we configured earlier.
Let's remove a few and confirm we get a pass result with just one in the field of view.
The results look good.
Let's head back over to the dashboard to review the results we expect to see during runtime.
Here we can see the result.
You can optionally hide the pass results by clicking the slider in the upper right of the widget section.
Within the results, you can see we are processing quite a bit of information, including the area of the item and its X, Y, and rotation in millimeters.
All of this information is available in the Node Creator flow and can be used in your custom application.
This information is also archived to the cloud for future reference.
Thanks for following along.
Please join me in the next lesson where we will setup our camera with high speed strobing.
Congratulations on your new enterprise organization.
In this session we will cover how to log in and navigate your admin console.
Let's start by heading over to the Flexible Vision login page.
Click on the organization login button and type the organization name provided by your Flexible Vision representative.
Log in with the single sign-on listed, or with a username and password if available.
If you have been assigned administrative rights to this organization, you will see an Admin Console menu item on the left.
Let's open this menu item to go through some key features.
To invite new members to our organization, click on the Invite Members button.
This will bring up a dialog box to enter the user's name, email, and sign-on method.
Click the Send button to send the invite request.
Let me add just one more user for this demonstration.
Have the invited user keep an eye out for an email that looks something like this and click to accept the invite.
Once the user has accepted the invite, refresh this page and you will now see them under Members.
By clicking on the area to the right of the user name, you will bring up an edit box that allows you to allocate tokens and storage to that user.
Tokens and storage will be decremented from the organization and added to the specified user.
Limiting storage values may prevent data from syncing to the cloud if the devices are linked to the user being edited.
Limiting tokens will prevent AI trainings from being created, along with use of the cloud Snap and Find feature.
Lower down on the page, you can edit your company name, logo, and color theme, and enable or disable project sharing.
To change your company logo, find your logo online, then copy and paste the image address.
To change your theme colors, click on the colored buttons and select your color choices.
Don't forget to click the Update button to save your theme.
If you have any questions about any of these features, please reach out to our team for more information.
Thanks for watching.
In this quick tip we will sync your vision models from the cloud to your local processor.
Let's start by navigating over to the Settings page of your local processor.
Click on the Sync dropdown, then click Sync Models.
Within a moment, a list of your available models will be displayed.
Select the checkboxes of all the models you would like to pull down to this device, then click Sync.
Note that the latest models will always be at the top of the list.
The version number is a timestamp; the greater the timestamp, the newer the model.
The models are now being synced.
You can check the status of the download by clicking on the bell in the top right of the screen.
Download times vary depending on your internet speed and the number of models being synced.
A typical download takes about a minute.
Once the bell is no longer visible, the models are ready to run.
If you are using the preset feature, make sure to update your presets to use your latest downloaded version by navigating to the Presets menu item and selecting the new version from the dropdown.
Then click Save.
If you have any questions about any of these features, please reach out to our team for more information.
Thanks for watching.
So today we’re diving into the world of machine vision. Very cool. You it is pretty cool. And you’ve sent over some pretty awesome articles from Flexible Vision. I’m already like just blown away by, you know, how much goes into picking the right camera and lens and lighting setup. It’s a lot more than just point and shoot, right? Right. It’s not just your iPhone. Yeah, no. So how about we just jump right in? Let’s do it. OK. So, you know, when you think about machine vision, we’re essentially
giving a machine the power to see, but in a very specific way. Right, it’s not like human vision where we’re trying to perceive the world and all of its beauty. It’s really about capturing the information that’s needed for a very specific task. Like, is this product effective or where is this object located so we can grab it? Okay, so it’s not about taking pretty pictures. No, It’s about capturing the right information. Right. So let’s break down the essential components. Okay. I’ve got cameras, lenses, and lighting.
What should we tackle first? Well, I think the heart of any machine vision system is the camera. specifically the image sensor. That’s what converts the light into electrical signals that a computer can understand. So there are two main types of image sensor, CCD and CMOS. CMOS sensors are becoming increasingly popular because they’re less expensive and they’re faster at processing images. OK.
which makes them really good for high speed applications. So if I’m on like a production line trying to inspect products as they’re whizzing by CMOS is probably my best bet. You got it. OK, cool. So now we’ve got to decide, do we want color or monochrome? Right. I mean, I would think color provides more information. You’d think so, right? But actually, monochrome sensors are often preferred in industrial settings. Really? Yeah, because they’re much more sensitive to light.
which can be really important in factory environments. Okay. And they can operate at much faster speeds. I see. So they can keep up with those, you know, those really fast moving production lines. So it’s not always about capturing every detail and color. It’s about getting the right information as quickly and clearly as possible. Exactly. Gotcha. Okay. So now I’ve also seen these terms, you know, rolling shutter and global shutter. yeah. What’s the difference and why should I care about that? Great question.
So a rolling shutter captures the image line by line, kind of like if you’re scanning a document. OK. And that can actually cause distortion if the object you’re trying to capture is moving quickly. I see. A global shutter, on the other hand, captures the entire image at once. OK. So there’s no distortion. Gotcha. So if I’m dealing with fast moving objects, a global shutter is essential. Yes, definitely. To get an accurate image. Exactly. OK, cool.
So beyond the type of sensor, you also need to consider sensor size, resolution, and frame rate. Yes, absolutely. So can you break those down for me? Sure. So sensor size is basically the physical dimensions of the sensor. And generally, larger sensors can gather more light, which can be a good thing if you’re in a low light situation. OK. Resolution refers to the number of pixels on the sensor. Right. So a higher resolution sensor will be able to capture more detail. OK. And frame rate.
is how many images the camera can capture per second. Okay. So if you need to capture really fast motion, you’re gonna want a camera with a high frame rate. So it’s a lot like Goldilocks and the three bears. Uh-huh. Not too big, not too small. Right. Just right. Exactly. For the application. Okay, cool. So now, even with like the perfect camera, we still need the right lens to focus that light onto the sensor. Right.
And I think that’s where it gets really fun. We had these manual lenses, autofocus lenses, and even liquid lenses. Liquid lenses, yeah, those are pretty cool. Yeah, what’s the advantage of a liquid lens? Well, they can change their shape really quickly. To maintain focus, which is really useful if the object you’re trying to image is moving around a lot, or if the distance to the object is constantly changing. Gotcha, so like on a fast moving production line, or if you have like a robot arm, that’s picking up and placing objects at different depths. Exactly.
A liquid lens could just keep everything in focus. Yeah, it’s basically instantaneous focusing. Wow. Okay, so we’ve got our camera, we’ve got our lens. Now you’ve mentioned that lighting is even more important than the camera itself. You know, it really is. Why is that? Well, think about it this way. If you try to take a picture with your phone in a dimly lit room, it’s going to be all grainy and blurry, and you won’t be able to see any detail. The same principle applies to machine vision.
You know, lighting is all about creating contrast and highlighting the features that you want the computer to see. Okay, so it’s not just about brightness. It’s about like strategically using the light to get the best possible image for analysis. Okay, so what are some of the lighting techniques that are commonly used? There are a ton, but some of the most common ones are backlighting, dark field, and diffuse lighting.
Okay. So with backlighting, you’re basically shining the light from behind the object. Okay. Which creates a silhouette that can be really useful for measuring dimensions or detecting the presence or absence of a feature. Dark field lighting uses low angle illumination to highlight surface imperfections or edges. And then diffuse lighting provides kind of even illumination over the entire object. Okay. So it minimizes shadows and glare.
So it’s a lot like a photography studio. You’ve got all these different ways to manipulate the light to get the desired effect. Can you give me an example of how these techniques might be used in a real world application? Sure. Let’s say you’re inspecting a bottle on a production line and you want to check the fill level. Backlighting would be a great choice there because it creates a clear outline of the liquid, which makes it easy for the system to measure how full the bottle is. Gotcha. OK. On the other hand,
If you want to inspect the surface of the bottle for scratches or defects, diffuse lighting might be a better choice because it’ll minimize the glare and highlight those imperfections. So choosing the right lighting technique is all about understanding what features you want to emphasize and how light interacts with different materials. Exactly. OK, this is fascinating stuff. Before we move on, are there any other key takeaways about cameras, lenses, and lighting?
that our listeners should keep in mind. Well, I think the most important thing to remember is that there’s no one-size-fits-all solution. You know, the best choices for your camera, lens, and lighting are always going to depend on the specific application. But understanding the basics, like we’ve talked about today, will give you a framework for making informed decisions. Yeah, absolutely. It’s all about choosing the right tools for the job. OK, so even with the perfect setup, I imagine there are still some challenges to overcome. There are always challenges.
Right. I’ve heard of something called lens distortion. Can you explain what that is and why it matters? Sure. You know how when you look through a fisheye lens, straight lines appear curved? Yeah. Well, similar distortions can occur in machine vision lenses, especially at the edges of the image. And that can really throw off measurements and make it difficult for the system to accurately interpret what it’s seeing. So a square might not actually look
perfectly square to the camera, which seems like a big problem. It can be a huge problem if you’re trying to measure things precisely. OK, so how do we fix that? That’s where checkerboard calibration comes in. OK. So basically, you take pictures of a precisely patterned checkerboard with your camera and lens setup. OK. And then you use software to analyze those images and identify and correct for any distortion that’s present in the lens. So it’s like teaching the system to see straight.
Exactly. By understanding the distortion pattern, the software can then unwarp the image and essentially correct for the lens’s imperfections. That’s awesome. It’s like magic, but it’s really just clever engineering. That’s right. So speaking of clever engineering, another topic that comes up a lot in machine vision is the trade-off between resolution and speed. yeah, that’s a classic challenge. Can you break that down for us? Sure.
Higher resolution cameras capture more detail, but they also generate larger images which take longer to process. Lower resolution cameras sacrifice some detail, but they can operate at much faster speeds. So it’s all about finding the right balance for your application. So how do you know which way to lean? Well, it really depends on your specific needs. You know, if you’re inspecting for tiny defects,
on a fast moving production line, speed might be more important than resolution. Because you need to keep up with the flow of products. But if you’re analyzing detailed images in a more controlled environment, resolution might be the more critical factor. It sounds like there’s a lot to consider when setting up a machine vision system. There is. Are there any general tips or best practices you can share that might help our listeners make informed decisions? Absolutely.
One of the best things you can do is clearly define your goals and requirements upfront. Like what are you actually trying to achieve with this machine vision system? What level of precision do you need? How fast does it need to operate? So start with the why. Exactly. Before you worry about the how. Exactly. Once you have a clear understanding of your needs, then you can start thinking about the specific components and techniques that’ll help you meet those requirements. And don’t be afraid to experiment. You know, machine vision is a field where
hands-on experience can be invaluable. I totally agree with that. OK. So let’s put all this knowledge to the test. OK. Let’s say our listener is tasked with setting up a machine vision system to inspect a batch of shiny curved metal parts for tiny surface defects. What are some key considerations they should keep in mind? All right. So first off, they’re going to need to think about how to minimize glare from those shiny surfaces. Right. Because those are going to reflect light like crazy. Exactly.
So diffuse lighting would probably be the best choice here. Okay. Provides even illumination and reduces those harsh reflections. Gotcha. So we don’t want the camera getting tricked by all that shine. Exactly. Okay. What about the camera itself? Color or monochrome? I would go with monochrome in this case. Since we’re not really concerned with color variations and monochrome sensors are more sensitive to light. Right. Which is helpful for detecting small details. Okay. Plus they offer faster processing speed.
Okay, so we’re prioritizing sensitivity and speed here. What about the lens? Any recommendations there? Yeah, given the curved surface of the part’s lens distortion is going to be a key consideration. Okay. So you want a lens with minimal distortion, especially towards the edges of the image. Right. That’s crucial for accurate analysis. Okay. And of course, checkerboard calibration. absolutely. To correct for any imperfections. Can’t forget about that. Gotcha. Okay, so now the age-old question.
Resolution or speed, which one wins in this scenario? Well, since we’re looking for tiny defects, resolution is going to be the deciding factor here. OK. They’ll want a camera with enough resolution to clearly capture those minute flaws, which might mean sacrificing some speed. Right. But in a controlled inspection environment like this, that’s probably a trade-off worth considering. Any final words of wisdom for our listeners as they embark on their machine vision journey?
Just remember the best machine vision system is the one that’s tailored to your specific needs. Don’t be afraid to ask questions, experiment, and learn from your experiences. All right, so before we wrap up, I want to circle back to something we talked about earlier. You mentioned that monochrome sensors are more sensitive to light, which makes them
great for detecting small details. Can you explain exactly how that works? Sure. So basically monochrome sensors don’t have color filters, which means that they can capture more of the available light. I see. And that extra light translates into a stronger signal, which makes it easier to distinguish between small variations in intensity. So it’s all about maximizing the amount of light that reaches the sensor. Exactly. OK, that makes sense. So to recap, we’ve talked about
the importance of choosing the right camera lens and lighting for your specific application. Right. We’ve discussed the challenges of lens distortion and how to overcome them with checkerboard calibration. Uh-huh. And we’ve explored the trade-offs between resolution and speed. Yep. It’s been a really comprehensive overview. It has. And I think it’s given our listeners a solid foundation for understanding the fundamentals of machine vision. Absolutely. So until next time, folks, keep those cameras rolling, those insights flowing.
Today, we’re diving into lenses, you those things you use every day but probably don’t think too much about. You sent over some fascinating material on optical aberrations, MTF curves, and lens selection. And I have to admit, I was a little intimidated at first, but some of this stuff is surprisingly fascinating. It is, and it’s way more relevant than people realize. mean, lenses are the backbone of everything from photography to…
machine vision, medical imaging, you name it, even astronomy. We’re talking about the tools that let us see and capture the world from the smallest details to the vastness of space. Okay, I’m ready to get my lens shape mind blown. So one of the first things that caught my eye was this concept of optical aberrations. It sounds a bit like a disease or something. What exactly are we talking about here? Not a disease, no, but it can definitely make your images a bit, well, sick if they’re not corrected. Basically, you see an ideal lens would
perfectly focus all incoming light onto a single point, giving you a crisp, clear image. Right, makes sense. But in reality, every lens has these imperfections that can distort or blur that light, causing what we call aberrations. So instead of a perfect point of light, you get something a little dot off. Exactly. And the thing is, there are different types of aberrations, each with its own quirky way of messing with your image. Let’s take spherical aberration, for example.
It causes light rays entering the lens at different points to focus at slightly different distances. OK. The result, a kind of haze or softness across the image, especially noticeable at wider apertures. interesting. So that soft, dreamy look some photographers love in their portraits, that could be due to a lens imperfection. It can be. In fact, older lenses were actually known for having a bit of uncorrected spherical aberration, which, by the way, contributed to that classic vintage look.
Modern lenses, of course, do a much better job of minimizing these aberrations, but they can never be completely eliminated. So like there’s always some degree of imperfection. It kind of makes you wonder how much that impacts what we see even in everyday life, Absolutely. Now, besides spherical aberration, there are other what we call usual suspects, things like field curvature and astigmatism. OK. I’ve heard of astigmatism from eye exams. Is it the same idea with lenses? It’s similar in concept.
With astigmatism in lenses, light rays entering the lens at different angles focus in different points, causing parts of the image to look stretched or blurred in different directions. Imagine taking a picture of a grid. You know, with astigmatism, some lines might be sharp while others are fuzzy. OK, I see. Now with field curvature, basically different parts of the image come into focus at different distances. So you might get the center sharp, but the edges are blurry or vice versa. Imagine trying to take a landscape photo where everything is
perfectly in focus. That would be super frustrating. Tell me about it. Okay, so I’m starting to see how these aberrations can really mess with your photos. But how can you tell how bad a lens is in terms of these imperfections? Is it just trial and error or is there like a more scientific way? There is, and that’s where the MTF curve comes in. Think of it as a lens’s report card. It tells you how well that lens performs in terms of resolution and contrast. Resolution I get. The more detail, the better.
Contrast, how does that play into image quality? So contrast is all about the difference between light and dark areas in an image. High contrast means those differences are more pronounced, giving you a sharper, more, you know, punchy image. Low contrast on the other hand makes the image look flat and muddy. So a good MTF curve means a lens that can capture fine details and
render them with strong contrast. Exactly. And different parts of the MTF curve actually tell us a lot about how specific aberrations are affecting the lens’s performance. For example, that drop-off you might see at higher frequencies on the curve. That could be our friend spherical aberration again, struggling to keep those fine details sharp. So you can actually see the fingerprints of these aberrations on the MTF curve. That’s pretty neat. It is. And if you compare the MTF curves of two different lenses,
you can really see how those differences translate into real world image quality. We actually have an example from the materials you sent over. Two lenses, two very different MTF curves, and the resulting images are, well, drastically different. One is tack sharp with tons of detail. The other is soft and blurry, lacking that crispness. Wow. Yeah, that’s pretty convincing. So knowing about MTF curves is obviously helpful, but how do you actually use this information to choose the right lens in the first place?
There are so many options out there, it’s overwhelming. It can be. And this is where understanding your needs and really digging a little deeper into lens types and their strengths and weaknesses comes in. And speaking of which, there’s a whole fascinating world of specialized lenses out there beyond the ones we typically use for everyday photography. Now that sounds like a great place to pick up. Let’s dive into the diverse world of lenses and how they’re shaping our world, often in ways we don’t even realize. I’m ready when you are. OK, so we’re back.
and ready to dive into the world of specialized lenses. I have to admit, I was blown away by this idea that there are lenses specifically designed for, like, inspecting tiny circuit boards or capturing images from space. It’s not just about taking pretty pictures anymore. Yeah. Lenses have become incredibly specialized tools in all sorts of fields. Think about machine vision, for example. It’s all about using cameras and computers.
to automate tasks that used to require human vision. So like those robots that assemble cars or the systems that check for defects in products. Exactly. And for those applications, you need lenses that can capture images with incredible accuracy and detail. You mentioned inspecting circuit boards earlier. Well, for that, you might use something called a telecentric lens. Telecentric lens. I’ve definitely never seen that setting on my camera. What makes them so special? Well, unlike regular lenses where the magnification changes depending on the object’s distance,
Telecentric lenses maintain a constant magnification even if the object is slightly closer or farther from the lens. So no matter where the object is, it always appears the same size in the image. Exactly. That sounds incredibly useful for precision measurements. It is. And because they minimize distortion, telecentric lenses are perfect for applications where you need to make precise measurements, like inspecting those tiny components on a circuit board or ensuring that parts are being manufactured to exact specifications.
Okay, telecentric lenses, check. What other lens superheroes are out there saving the day in machine vision? Well, let’s say you’re working with a continuous moving object, like a roll of fabric or paper. In that case, you might use a line scan lens. A line scan lens, okay. What’s the advantage of that over a traditional lens? So instead of capturing the entire image at once, a line scan lens captures one line at a time as the object moves past, kind of like scanning a document.
These individual lines are then stitched together to create a high resolution image of the entire object. So you get this super detailed, continuous image. Perfect for spotting any defects or inconsistencies in the material. That’s brilliant. It is. And it really highlights how specific lens designs can really revolutionize certain industries. Now moving beyond machine vision,
How do we choose the right lens for a given application? Because with all this variety, it can be a little overwhelming, right? Definitely overwhelming. It’s like walking into a giant lens store and having no idea where to start. Yeah. Maybe we can break down the decision making process a bit. Absolutely. And I think a good place to start is by looking at some real world examples. Let’s say you’re a photographer. You’re planning a trip to the Grand Canyon. What kind of lens comes to mind? I’m picturing those epic wide angle shots capturing the vastness of the canyon.
So something that could fit a lot into the frame. You got it. That’s where a wide angle lens comes in. It has a short focal length, giving you a wider field of view. Perfect for those sweeping landscape shots. But wait. I’ve heard that wide angle lenses can make things look distorted, especially at the edges. Like straight lines start to curve. That’s true. It’s that lens distortion we talked about earlier. Some wide angle lenses handle it better than others.
If you’re concerned about accuracy, look for lenses specifically designed for architectural photography or landscapes. They tend to have less distortion. OK, good to know. Now let’s say I want to switch gears and get some close-ups of those wildflowers blooming in the canyon. What kind of lens magic do I need for that? For that, you’d want a macro lens. These are designed for extreme close-ups, allowing you to capture tiny details invisible to the naked eye. Think of those intricate patterns on a butterfly’s wing.
or the delicate veins on a flower petal. So I could basically turn my camera into a microscope. Pretty much. But with macro photography, you need a lens that can resolve fine details at close distances while providing enough depth of field to keep the subject in focus. Because even the slightest movement can throw the entire image out of focus. Makes sense. It’s like a whole different level of precision compared to regular photography. Exactly.
And macro lenses are a great example of how understanding lens design and performance is crucial. You need to consider factors like working distance, magnification, and depth of field to get those stunning close-ups. OK, I’m starting to see how the type of photography you’re doing really dictates the lens you need. It’s not just about buying the most expensive lens. It’s about finding the right tool for the job. But how do you even begin to navigate all these options? There are so many lenses out there. It can be daunting.
And that’s where resources like Flexible Vision come in. They have a wealth of information on their website, including detailed lens specifications, application notes, and even online tools to help you choose the right lens for your needs. So it’s a good idea to do your research and maybe even consult with an expert. Absolutely. They can help you understand your specific needs, whether it’s focal length, aperture, image circle size, or any of those other parameters we’ve discussed.
This has been incredibly eye-opening. I feel like I’ve only just scratched the surface of the world of lenses. But we’ve covered a lot of ground, from aberrations and MTF curves to specialized lenses for all sorts of applications. What are some key takeaways you want our listeners to remember? Well, first and foremost, I hope they’ve gained a deeper appreciation for the complexity and importance of lenses. They’re not just pieces of glass. You know, they’re precision instruments that shape the way we see and interact with the world.
And I encourage everyone, read more about optics, experiment with different lenses, and most importantly, keep asking those questions. Because the more we understand about how we see the world, the better equipped we are to navigate it. I totally agree. A big thank you to you for taking the time to share your expertise with us today. It’s been a pleasure. It’s been my pleasure, really. And to our listeners, thank you so much for joining us. We hope you’ve enjoyed this journey into the world of lenses.
Until next time, keep exploring and keep those lenses clean.