A. What I am trying to implement.
A web application allowing real-time speech recognition inside a web browser (like this).
B. Technologies I am currently thinking of using to achieve A.
- JavaScript
- Node.js
- WebRTC
- Microsoft Speech API or Pocketsphinx.js or something else (cannot use Web Speech API)
C. Very basic workflow
- Web browser establishes connection to Node server (server acts as a signaling server and also serves static files)
- Web browser acquires an audio stream using getUserMedia() and sends the user's voice to the Node server (see the sketch after this list)
- Node server passes audio stream being received to speech recognition engine for analysis
- Speech recognition engine returns result to Node server
- Node server sends text result back to initiating web browser
- (Node server performs steps 1 to 5 to process requests from other browsers)
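A minimal browser-side sketch of steps 2 and 5 in this workflow, assuming the audio is pushed to the server over a plain WebSocket (the endpoint URL and buffer size are placeholders, and WebRTC data channels would be an alternative transport):

    // Capture microphone audio and stream 16-bit PCM chunks to the server.
    // ws://localhost:8080/audio is an assumed endpoint -- adjust to your setup.
    const socket = new WebSocket('ws://localhost:8080/audio');
    socket.binaryType = 'arraybuffer';

    navigator.mediaDevices.getUserMedia({ audio: true }).then((stream) => {
      const audioCtx = new AudioContext();
      const source = audioCtx.createMediaStreamSource(stream);
      // ScriptProcessorNode is deprecated but widely supported; AudioWorklet is the modern option.
      const processor = audioCtx.createScriptProcessor(4096, 1, 1);

      processor.onaudioprocess = (event) => {
        const float32 = event.inputBuffer.getChannelData(0);
        // Convert 32-bit float samples to 16-bit PCM, which most recognizers expect.
        // Note: samples are at audioCtx.sampleRate (often 44.1/48 kHz); Pocketsphinx
        // models typically expect 16 kHz, so resampling is still needed somewhere.
        const pcm16 = new Int16Array(float32.length);
        for (let i = 0; i < float32.length; i++) {
          const s = Math.max(-1, Math.min(1, float32[i]));
          pcm16[i] = s < 0 ? s * 0x8000 : s * 0x7fff;
        }
        if (socket.readyState === WebSocket.OPEN) {
          socket.send(pcm16.buffer);
        }
      };

      source.connect(processor);
      processor.connect(audioCtx.destination);
    });

    // Recognition results come back as text messages (step 5).
    socket.onmessage = (event) => {
      console.log('Transcript:', event.data);
    };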
D. Questions
- Would Node.js be suitable to achieve C?
- How could I pass received audio streams from my Node server to a speech recognition engine running separately from the server?
- Could my speech recognition engine be running as another Node application (if I use Pocketsphinx)? So my Node server communicates to my Node speech recognition server.
- The source code behind your link is at src.chromium.org/viewvc/chrome/trunk/src/content/browser/speech; you may want to look at how they implement it to inform your architecture. – Robert Rowntree, Jun 1, 2014
2 Answers
Would Node.js be suitable to achieve C?
Yes, though there is no hard requirement for it. Some people run recognition servers with GStreamer instead; for example, see
http://kaljurand.github.io/dictate.js/
Node should be fine too.
How could I pass received audio streams from my Node server to a speech recognition engine running separately from the server?
There are many ways to do node-to-node communication. One of them is socket.io (http://socket.io); plain TCP sockets also work. The particular framework depends on your requirements for fault tolerance and scalability.
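A minimal sketch of such a relay on the Node web server, assuming the browser sends raw PCM chunks over a WebSocket and a separate recognition service listens on TCP port 5000 and writes text results back on the same connection (ports, the ws module, and the recognizer's line-based protocol are all assumptions to adapt to your setup):

    const WebSocket = require('ws');   // npm install ws
    const net = require('net');

    const wss = new WebSocket.Server({ port: 8080 });

    wss.on('connection', (browser) => {
      // One TCP connection to the recognizer per browser session.
      const recognizer = net.connect({ port: 5000, host: '127.0.0.1' });

      // Forward raw audio chunks from the browser to the recognizer.
      browser.on('message', (chunk) => {
        recognizer.write(chunk);
      });

      // Relay text results from the recognizer back to the browser.
      recognizer.on('data', (result) => {
        browser.send(result.toString('utf8'));
      });

      browser.on('close', () => recognizer.end());
      recognizer.on('error', () => browser.close());
    });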
Could my speech recognition engine be running as another Node application (if I use Pocketsphinx)? So my Node server communicates to my Node speech recognition server.
Yes, sure. You can create a Node module to wrap the Pocketsphinx API.
UPDATE: check this; it should be similar to what you need:
http://github.com/cmusphinx/node-pocketsphinx
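If native bindings are not an option, a simpler (if cruder) sketch is to wrap the pocketsphinx_continuous command-line tool in a child process. This decodes a finished WAV file (16 kHz, 16-bit mono), so it is batch rather than streaming recognition; the file path and model flags are assumptions to adapt to your setup:

    const { execFile } = require('child_process');

    function recognizeWav(wavPath, callback) {
      execFile(
        'pocketsphinx_continuous',
        ['-infile', wavPath, '-logfn', '/dev/null'],  // -logfn silences INFO chatter
        (err, stdout) => {
          if (err) return callback(err);
          // pocketsphinx_continuous prints one hypothesis per utterance on stdout.
          callback(null, stdout.trim());
        }
      );
    }

    // Usage: recognizeWav('utterance.wav', (err, text) => console.log(text));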
You should contact Andre Natal, who showed similar demos at last fall's Firefox Summit and is now working on a Google Summer of Code project implementing offline speech recognition in Firefox/FxOS: http://cmusphinx.sourceforge.net/2014/04/speech-projects-on-gsoc-2014/