Hello,
I have an LSTM-based network. I have exported it to ONNX, converted it to IR, and integrated it successfully into the Inference Engine.
I noticed that in order to maintain the LSTM's states I must pass a complete sequence to the network at once and call Infer() on the entire sequence. However, if I pass the elements one by one and call Infer() for each, the network does not remember previous states and starts from scratch every time.
Since I am running in real time, I don't have the entire sequence a priori and I don't want to add delay.
Is there a way to get and set the hidden states of the network or any other way to maintain the states between inference calls?
I could not find this anywhere in the documentation.
Thanks in advance!
Matan
Hi Matanwaves,
Greetings to you.
The following tutorial, ‘Difference Between Return Sequences and Return States for LSTMs in Keras’, explains how to access the hidden states of an LSTM:
https://machinelearningmastery.com/return-sequences-and-return-states-for-lstms-in-keras/
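For quick reference, here is a minimal Keras sketch of the idea from that tutorial; the layer width and input shape are arbitrary placeholders:

```python
from tensorflow.keras.layers import Input, LSTM
from tensorflow.keras.models import Model

# return_state=True exposes the final hidden state (state_h) and
# cell state (state_c) alongside the per-timestep outputs
inputs = Input(shape=(None, 8))
outputs, state_h, state_c = LSTM(16, return_sequences=True,
                                 return_state=True)(inputs)
model = Model(inputs, [outputs, state_h, state_c])
```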
As for your query about maintaining the states between inference calls, the Memory layer saves the state between two infer requests.
More information about the Memory layer is available in the OpenVINO documentation.
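As a rough illustration, here is a minimal sketch of reading and resetting those states from an infer request. It assumes a recent OpenVINO Python API (openvino.runtime) and an IR whose LSTM states are exposed as Memory (ReadValue/Assign) pairs; the model path and input shape are hypothetical:

```python
import numpy as np
from openvino.runtime import Core

core = Core()
compiled = core.compile_model("lstm_model.xml", "CPU")  # hypothetical IR path
request = compiled.create_infer_request()

# Feed the stream element by element; Memory layers carry the LSTM
# state from one infer() call to the next on this request
frame = np.zeros((1, 1, 8), dtype=np.float32)           # hypothetical input shape
result = request.infer([frame])

# Inspect or reset the persistent states, e.g. at a sequence boundary
for state in request.query_state():
    print(state.name, state.state.data.shape)
    state.reset()
```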
Regards,
Munesh
