I’m a huge fan of interactive visualizations. As a computer vision engineer, I deal with image processing related tasks almost every day, and more often than not I’m iterating on a problem where I need visual feedback to make decisions. Consider a very simple image processing pipeline with a single step that has some parameters to transform an image:

How do you know which parameters to adjust? Does the pipeline even work as expected? Without visualizing your output, you might miss out on key insights and make suboptimal decisions.
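As a minimal sketch of what I mean (the process function, the kernel-size parameter and the input path are made up purely for illustration), such a single-step pipeline could look like this:
import cv2

# Hypothetical single-step pipeline: blur the image,
# with the kernel size as the tunable parameter.
def process(image, ksize: int = 15):
    return cv2.GaussianBlur(image, ksize=(ksize, ksize), sigmaX=0)

image = cv2.imread("input.jpg")  # hypothetical example input
result = process(image, ksize=15)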
Sometimes simply displaying the output image and/or some calculated metrics can be enough to iterate on the parameters. But I’ve found myself in many situations where a tool would be immensely helpful to iterate quickly and interactively on my pipeline. So in this article I’ll show you how to work with the simple built-in interactive elements from OpenCV, as well as how to build more modern user interfaces for computer vision projects using customtkinter.
Prerequisites
If you want to follow along, I recommend setting up your local environment with uv and installing the following packages:
uv add numpy opencv-python pillow customtkinter
Goal
Before we dive into the code of the project, let’s quickly outline what we want to build. The application should use the webcam feed and allow the user to select different types of filters that will be applied to the stream. The processed image should be shown in real time in the window. A rough sketch of a possible UI would look as follows:

OpenCV – GUI
Let’s start with a simple loop that fetches frames from your webcam and displays them in an OpenCV window.
import cv2

cap = cv2.VideoCapture(0)

while True:
    ret, frame = cap.read()
    if not ret:
        break

    cv2.imshow("Video Feed", frame)

    key = cv2.waitKey(1) & 0xFF
    if key == ord('q'):
        break

cap.release()
cv2.destroyAllWindows()
Keyboard Input
The simplest way to add interactivity here is with keyboard input. For example, we can cycle through different filters using the number keys.
...

filter_type = "normal"

while True:
    ...

    if filter_type == "grayscale":
        frame = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    elif filter_type == "normal":
        pass

    ...

    if key == ord('1'):
        filter_type = "normal"
    if key == ord('2'):
        filter_type = "grayscale"

    ...
Now you can switch between the normal image and the grayscale version by pressing the number keys 1 and 2. Let’s also quickly add a caption to the image so we can actually see the name of the filter we’re applying.
We need to be careful here: if you take a look at the shape of the frame after the filter, you’ll notice that the dimensionality of the frame array has changed. Remember that OpenCV image arrays are ordered HWC (height, width, channels) with the channels in BGR order (blue, green, red), so the 640×480 image from my webcam has shape (480, 640, 3).
print(filter_type, frame.shape)
# normal (480, 640, 3)
# grayscale (480, 640)
Because the grayscale operation outputs a single-channel image, the color dimension is dropped. If we now want to draw on top of this image, we either need to specify a single-channel color for the grayscale image or convert the image back to the original BGR format. The second option is a bit cleaner because it lets us unify the annotation of the image.
if filter_type == "grayscale":
    frame = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
elif filter_type == "normal":
    pass

if len(frame.shape) == 2:  # Convert grayscale back to BGR
    frame = cv2.cvtColor(frame, cv2.COLOR_GRAY2BGR)
Caption
I want to add a black border at the bottom of the image, on top of which the name of the filter will be shown. We can use the copyMakeBorder function to pad the image with a border color at the bottom. Then we can draw the text on top of this border.
# Add a black border at the bottom of the frame
border_height = 50
border_color = (0, 0, 0)
frame = cv2.copyMakeBorder(frame, 0, border_height, 0, 0, cv2.BORDER_CONSTANT, value=border_color)

# Show the filter name
cv2.putText(
    frame,
    filter_type,
    (frame.shape[1] // 2 - 50, frame.shape[0] - border_height // 2 + 10),
    cv2.FONT_HERSHEY_SIMPLEX,
    1,
    (255, 255, 255),
    2,
    cv2.LINE_AA,
)
This is how the output should look; you can switch between the normal and grayscale mode and the frames will be captioned accordingly.

Sliders
Instead of using the keyboard as the input method, OpenCV also provides a basic trackbar (slider) UI element. The trackbar needs to be initialized at the beginning of the script. We need to reference the same window that we will later display our images in, so I’ll create a variable for the name of the window. Using this name, we can create the trackbar and let it act as a selector for the index into the list of filters.
filter_types = ["normal", "grayscale"]

win_name = "Webcam Stream"
cv2.namedWindow(win_name)

tb_filter = "Filter"
# def createTrackbar(trackbarName: str, windowName: str, value: int, count: int, onChange: _typing.Callable[[int], None]) -> None: ...
cv2.createTrackbar(
    tb_filter,
    win_name,
    0,
    len(filter_types) - 1,
    lambda _: None,
)
Notice how we use an empty lambda for the onChange callback; we will fetch the value manually in the loop. Everything else stays the same.
while True:
    ...

    # Get the selected filter type
    filter_id = cv2.getTrackbarPos(tb_filter, win_name)
    filter_type = filter_types[filter_id]

    ...
And voilà, we have a trackbar to select our filter.

We can now also add more filters simply by extending our list and implementing each processing step.
filter_types = [
    "normal",
    "grayscale",
    "blur",
    "threshold",
    "canny",
    "sobel",
    "laplacian",
]

...

if filter_type == "grayscale":
    frame = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
elif filter_type == "blur":
    frame = cv2.GaussianBlur(frame, ksize=(15, 15), sigmaX=0)
elif filter_type == "threshold":
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    _, frame = cv2.threshold(gray, thresh=127, maxval=255, type=cv2.THRESH_BINARY)
elif filter_type == "canny":
    frame = cv2.Canny(frame, threshold1=100, threshold2=200)
elif filter_type == "sobel":
    frame = cv2.Sobel(frame, ddepth=cv2.CV_64F, dx=1, dy=0, ksize=5)
elif filter_type == "laplacian":
    frame = cv2.Laplacian(frame, ddepth=cv2.CV_64F)
elif filter_type == "normal":
    pass

if frame.dtype != np.uint8:
    # Scale the frame to uint8 if necessary
    cv2.normalize(frame, frame, 0, 255, cv2.NORM_MINMAX)
    frame = frame.astype(np.uint8)
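To see why this last normalization step is needed, you can inspect the raw output of one of the gradient filters inside the loop. A quick sanity check (the exact numbers depend on your scene) might look like this:
# Sobel/Laplacian with ddepth=cv2.CV_64F return float64 images that can
# contain negative values, so they cannot be displayed directly as uint8.
print(frame.dtype, frame.min(), frame.max())
# e.g. float64 -1893.0 2040.0 (before normalization)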

Modern GUI with CustomTkinter
Now I don’t know about you, but the current user interface doesn’t look very modern to me. Don’t get me wrong, there is some beauty in the style of the interface, but I prefer cleaner, more modern designs. Plus, we’re already at the limit of what OpenCV offers out of the box in terms of UI elements: no buttons, text fields, dropdowns, checkboxes or radio buttons, and no custom layouts. So let’s see how we can transform the look and user experience of this basic application into a fresh and clean one.

To get started, we first need to create a class for our app. We create two frames: the first one contains our filter selection on the left side and the second one wraps the image display. For now, let’s start with a simple placeholder text. Unfortunately there’s no out-of-the-box OpenCV component in customtkinter, so we will need to quickly build our own in the next few steps. But let’s first finish the basic UI layout.
import customtkinter


class App(customtkinter.CTk):
    def __init__(self) -> None:
        super().__init__()

        self.title("Webcam Stream")
        self.geometry("800x600")

        self.filter_var = customtkinter.IntVar(value=0)

        # Frame for filters
        self.filters_frame = customtkinter.CTkFrame(self)
        self.filters_frame.pack(side="left", fill="both", expand=False, padx=10, pady=10)

        # Frame for image display
        self.image_frame = customtkinter.CTkFrame(self)
        self.image_frame.pack(side="right", fill="both", expand=True, padx=10, pady=10)

        self.image_display = customtkinter.CTkLabel(self.image_frame, text="Loading...")
        self.image_display.pack(fill="both", expand=True, padx=10, pady=10)


app = App()
app.mainloop()

Filter Radio Buttons
Now that the skeleton is built, we can start filling in our components. For the left side, I will be using the same list of filter_types to populate a group of radio buttons that select the filter.
# Create radio buttons for each filter type
self.filter_var = customtkinter.IntVar(value=0)
for filter_id, filter_type in enumerate(filter_types):
    rb_filter = customtkinter.CTkRadioButton(
        self.filters_frame,
        text=filter_type.capitalize(),
        variable=self.filter_var,
        value=filter_id,
    )
    rb_filter.pack(padx=10, pady=10)

    if filter_id == 0:
        rb_filter.select()

Image Display Component
Now we can get started on the interesting part: how to get our OpenCV frames to show up in the image component. Because there’s no built-in component, let’s create our own based on the CTkLabel. This allows us to display a loading text while the webcam stream is starting up.
from typing import Any

...


class CTkImageDisplay(customtkinter.CTkLabel):
    """
    A reusable ctk widget to display opencv images.
    """

    def __init__(
        self,
        master: Any,
    ) -> None:
        self._textvariable = customtkinter.StringVar(master, "Loading...")

        super().__init__(
            master,
            textvariable=self._textvariable,
            image=None,
        )

...


class App(customtkinter.CTk):
    def __init__(self) -> None:
        ...
        self.image_display = CTkImageDisplay(self.image_frame)
        self.image_display.pack(fill="both", expand=True, padx=10, pady=10)
So far nothing has changed, we simply swapped out the existing label for our custom class implementation. In our CTkImageDisplay class we can now define a function to show an image in the component; let’s call it set_frame.
import cv2
import numpy.typing as npt
from PIL import Image


class CTkImageDisplay(customtkinter.CTkLabel):
    ...

    def set_frame(self, frame: npt.NDArray) -> None:
        """
        Set the frame to be displayed in the widget.

        Args:
            frame: The new frame to display, in opencv format (BGR).
        """
        target_width, target_height = frame.shape[1], frame.shape[0]

        # Convert the frame to PIL Image format
        frame_rgb = cv2.cvtColor(frame, cv2.COLOR_BGR2RGB)
        frame_pil = Image.fromarray(frame_rgb, "RGB")

        ctk_image = customtkinter.CTkImage(
            light_image=frame_pil,
            dark_image=frame_pil,
            size=(target_width, target_height),
        )
        self.configure(image=ctk_image, text="")
        self._textvariable.set("")
Let’s digest this. First we need to know how big our image component will be; we can extract that information from the shape property of our image array. To display the image in tkinter, we need a Pillow Image object, we cannot use the OpenCV array directly. To convert an OpenCV array to Pillow, we first convert the color space from BGR to RGB and then use the Image.fromarray function to create the Pillow Image object. Next we create a CTkImage, where we use the same image regardless of the theme and set the size according to our frame. Finally we use the configure method to set the image on our label. At the end, we also reset the text variable to remove the “Loading…” text, even though it would theoretically be hidden behind the image.
To quickly test this, we can set the first image from our webcam in the constructor. (We will see in a moment why this isn’t such a good idea.)
class App(customtkinter.CTk):
    def __init__(self) -> None:
        ...
        cap = cv2.VideoCapture(0)
        _, frame0 = cap.read()
        self.image_display.set_frame(frame0)
If you run this, you’ll notice that the window takes a bit longer to pop up, but after a short delay you should see a static image from your webcam.
NOTE: If you don’t have a webcam ready, you can also just use a local video file by passing the file path to the cv2.VideoCapture constructor.
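For example, with a hypothetical path to a local video file:
cap = cv2.VideoCapture("videos/sample.mp4")  # hypothetical local video file instead of the webcam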

Now this isn’t very exciting yet, since the frame doesn’t update. So let’s see what happens if we try to do that naively.
class App(customtkinter.CTk):
    def __init__(self) -> None:
        ...
        cap = cv2.VideoCapture(0)
        while True:
            ret, frame = cap.read()
            if not ret:
                break
            self.image_display.set_frame(frame)
Almost the same as before, except now we run the frame loop as we did in the previous chapter with the OpenCV GUI. If you run this, you will see… exactly nothing. The window never shows up, since we’re creating an infinite loop in the constructor of the app! This is also the reason why the program only showed up after a delay in the previous example: opening the webcam stream is a blocking operation, and while it runs the event loop for the window cannot start, so the window doesn’t appear yet.
Let’s fix this with a slightly better implementation that allows the GUI event loop to run while we also update the frame every so often. We can use the after method of tkinter to schedule a function call while yielding control during the wait time.
...
self.cap = cv2.VideoCapture(0)
self.after(10, self.update_frame)

def update_frame(self) -> None:
    """
    Update the displayed frame.
    """
    ret, frame = self.cap.read()
    if not ret:
        return

    self.image_display.set_frame(frame)
    self.after(10, self.update_frame)
We still set up the webcam stream in the constructor, so we haven’t solved that problem yet. But at least we can see a continuous stream of frames in our image component.

Applying Filters
Now that the frame loop is working, we can re-implement our filters from the beginning and apply them to our webcam stream. In the update_frame function, we check the current filter variable and apply the corresponding filter function.
def update_frame(self) -> None:
    ...

    # Get the selected filter type
    filter_id = self.filter_var.get()
    filter_type = filter_types[filter_id]

    if filter_type == "grayscale":
        frame = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    elif filter_type == "blur":
        frame = cv2.GaussianBlur(frame, ksize=(15, 15), sigmaX=0)
    elif filter_type == "threshold":
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        _, frame = cv2.threshold(gray, thresh=127, maxval=255, type=cv2.THRESH_BINARY)
    elif filter_type == "canny":
        frame = cv2.Canny(frame, threshold1=100, threshold2=200)
    elif filter_type == "sobel":
        frame = cv2.Sobel(frame, ddepth=cv2.CV_64F, dx=1, dy=0, ksize=5)
    elif filter_type == "laplacian":
        frame = cv2.Laplacian(frame, ddepth=cv2.CV_64F)
    elif filter_type == "normal":
        pass

    if frame.dtype != np.uint8:
        # Scale the frame to uint8 if necessary
        cv2.normalize(frame, frame, 0, 255, cv2.NORM_MINMAX)
        frame = frame.astype(np.uint8)

    if len(frame.shape) == 2:  # Convert grayscale to BGR
        frame = cv2.cvtColor(frame, cv2.COLOR_GRAY2BGR)

    self.image_display.set_frame(frame)
    self.after(10, self.update_frame)
And now we’re back to the full functionality of the application: you can select any filter on the left side and it will be applied in real time to the webcam feed!

Multithreading and Synchronization
Although the application runs as is, there are some problems with the current way we run our frame loop. At the moment everything runs in a single thread, the main GUI thread. This is why we don’t immediately see the window pop up at the beginning: our webcam initialization blocks the main thread. Now imagine we did some heavier image processing, maybe running the images through a neural network. You wouldn’t want your user interface to be blocked while the network is running inference; it would lead to a very unresponsive user experience when clicking the UI elements!

A better way to handle this in our application is to separate the image processing from the user interface. In general, it’s almost always a good idea to separate your GUI logic from any sort of non-trivial processing. So in our case, we will run a separate thread that is responsible for the image loop. It will read the frames from the webcam stream and apply the filters.

NOTE: Python threads are not “real” threads in the sense that they do not run in parallel on different logical CPU cores. In Python multithreading the context switches between threads, but due to the GIL, the global interpreter lock, only one thread per Python process can execute Python code at a time. If you want “real” parallel processing, you would have to use multiprocessing. Since our workload here is not CPU bound but actually I/O bound, multithreading suffices.
import threading

class App(customtkinter.CTk):
    def __init__(self) -> None:
        ...
        self.webcam_thread = threading.Thread(target=self.run_webcam_loop, daemon=True)
        self.webcam_thread.start()

    def run_webcam_loop(self) -> None:
        """
        Run the webcam loop in a separate thread.
        """
        self.cap = cv2.VideoCapture(0)
        if not self.cap.isOpened():
            return

        while True:
            ret, frame = self.cap.read()
            if not ret:
                break

            # Filters
            ...

            self.image_display.set_frame(frame)
If you run this, you’ll now see that our window opens up immediately and we even see our loading text while the webcam stream is opening. However, as soon as the stream starts, the frames begin to flicker. Depending on a number of factors, you might experience different visual artifacts or errors at this stage.
Warning: flashing image

Why is this happening? The problem is that we’re trying to write the new frame while the internal refresh loop of the user interface might be reading the same array to draw it on the screen. They are both competing for the same frame array.
It’s generally not a good idea to update UI elements directly from a different thread; in some frameworks this is even prevented and will raise exceptions. In Tkinter we can do it, but we will get weird results. We need some sort of synchronization between our threads. That’s where the Queue comes into play.

You’re probably familiar with queues from the grocery store or theme parks. The concept of a queue here is very similar: the first element that goes into the queue also leaves it first (First In, First Out).
In this case, we actually just want a queue with a single element, a single-slot queue. The queue implementation in Python is thread-safe, meaning we can put and get items from different threads. Perfect for our use case: the processing thread will put the image arrays into the queue and the GUI thread will try to get an element, but not block if the queue is empty.
import queue
import threading

class App(customtkinter.CTk):
    def __init__(self) -> None:
        ...
        self.queue = queue.Queue(maxsize=1)

        self.webcam_thread = threading.Thread(target=self.run_webcam_loop, daemon=True)
        self.webcam_thread.start()

        self.frame_loop_dt_ms = 16  # ~60 FPS
        self.after(self.frame_loop_dt_ms, self._update_frame)

    def _update_frame(self) -> None:
        """
        Update the frame in the image display widget.
        """
        try:
            frame = self.queue.get_nowait()
            self.image_display.set_frame(frame)
        except queue.Empty:
            pass

        self.after(self.frame_loop_dt_ms, self._update_frame)

    def run_webcam_loop(self) -> None:
        ...
        while True:
            ...
            self.queue.put(frame)
Notice how we move the direct call to the set_frame function from the webcam loop, which runs in its own thread, to the _update_frame function that runs on the main thread, repeatedly scheduled at 16 ms intervals.
Here it’s important to use the get_nowait function in the main thread; if we used the get function instead, we would block there. This call does not block, but raises a queue.Empty exception if there’s no element to fetch, so we have to catch it and ignore it. In the webcam loop we can use the blocking put function, because it doesn’t matter that we block run_webcam_loop, there’s nothing else that needs to run there.
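As a side note, if you ever wanted the producer to drop frames instead of waiting, for example because the processing thread produces frames faster than the GUI consumes them, a non-blocking put would be a possible alternative (just a sketch, not what this demo uses):
try:
    self.queue.put_nowait(frame)
except queue.Full:
    pass  # drop this frame, the GUI has not consumed the previous one yet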

And now everything is working as expected, no more flashing frames!
Conclusion
Combining a UI framework like Tkinter with OpenCV allows us to build modern-looking applications with an interactive graphical user interface. Because the UI runs in the main thread, we run the image processing in a separate thread and synchronize the data between the threads using a single-slot queue. You can find a cleaned-up version of this demo with a more modular structure in the repository below. Let me know if you build something interesting with this approach. Take care!
Check out the full source code in the GitHub repo:
https://github.com/trflorian/ctk-opencv