CNN Example - How to input data to the synthesised block #1173
-
Hi folks, HLS4ML noob here! For the CNN example project from HLS4ML's tutorial repo on GitHub <https://github.com/fastmachinelearning/hls4ml-tutorial/blob/main/part6_cnns.ipynb>, I've synthesised the model in Vivado HLS and opened the generated IP in Vivado. This is probably a very basic question, but how do you actually input data into the model? As there are 3x 16-bit input vectors, I imagine you enter 16 bits of the "Red" part of the pixel, 16 bits of the "Green" part, and 16 bits of the "Blue" part into the input vector ports, clock in all the pixels as with a raster, and drive the ready and done (etc.) signals like regular AXI transactions. But I don't know if there's a standard way to get an image into a CNN when doing it embedded, or if there's a known way with these sorts of cores. Thanks in advance!
-
Can I ask how you managed to get the IP into Vivado in the first place? There appears to be a step missing in the tutorials: even though I explicitly asked for an IP to be exported, and the folders do appear in the directory, I have been unsuccessful importing it into Vivado.
-
I made a walk-through guide as I went along, but it's on my work computer, so I'll get back to you on Monday.
The model accepts 3x 16-bit input vectors over an AXI4-Stream interface. If there is a single input port, you basically pack the RGB pixel data into a 48-bit input. Set `ap_start` high, then drive the input in synchronisation with the clock signal (`ap_clk`). You also need to assert the `tvalid` signal to '1' to indicate that valid data is present on the input ports. Data transfer occurs on every clock cycle where both `tvalid` (from your side) and `tready` (from the CNN IP block) are high, as per the AXI4-Stream handshaking protocol. On each clock cycle where `tvalid` and `tready` are both high, the current input data is consumed and you can provide the next pixel's data on the following cycle. You repeat…