
json - What is the most efficient way in JavaScript to parse huge amounts of data from a file - Stack Overflow


What is the most efficient way in JavaScript to parse huge amounts of data from a file?

Currently I use JSON.parse to deserialize an uncompressed 250 MB file, which is really slow. Is there a simple and fast way to read a lot of data in JavaScript from a file without looping through every character? The data stored in the file is only a few floating-point arrays.

UPDATE: The file contains a 3D mesh with 6 buffers (vertices, UVs, etc.). The buffers also need to end up as typed arrays. Streaming is not an option because the file has to be fully loaded before the graphics engine can continue. Maybe a better question is how to transfer huge typed arrays from a file to JavaScript in the most efficient way.
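For context, the slow baseline presumably looks something like the sketch below: fetch the whole file as text, JSON.parse it, then copy each plain array into a typed array. The field names (`vertices`, `uvs`) and the `startEngine` entry point are assumptions about the file layout, not something given in the question.

```javascript
// Hypothetical baseline: the whole 250 MB file is fetched as text,
// parsed with JSON.parse, and each plain array is copied into a typed array.
// Field names (vertices, uvs, ...) and startEngine() are assumptions.
var xhr = new XMLHttpRequest();
xhr.open('GET', 'mesh.json', true);
xhr.onload = function () {
  var mesh = JSON.parse(xhr.responseText);        // slow: tokenizes every character
  var vertices = new Float32Array(mesh.vertices); // extra copy of each buffer
  var uvs      = new Float32Array(mesh.uvs);
  // ... four more buffers ...
  startEngine(vertices, uvs /*, ... */);          // hypothetical entry point
};
xhr.send();
```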


asked Apr 2, 2013 at 11:19 by Kaj Dijkstra (edited Apr 2, 2013 at 12:03)
  • In the browser or in Node.js? – Octavian Helm Commented Apr 2, 2013 at 11:22
  • Why is the file so big, and why does it have to be in the browser? – Thinking Sites Commented Apr 2, 2013 at 11:38

5 Answers


I would recommend a SAX-based parser or a stream parser for this kind of job in JavaScript.

DOM-style parsing would load the whole thing into memory, and that is not the way to go for large files like the one you mentioned.

For JavaScript-based SAX parsing (of XML) you might refer to https://code.google.com/p/jssaxparser/

and

for JSON you might write your own; the following link demonstrates how to write a basic SAX-based parser in JavaScript: http://ajaxian.com/archives/javascript-sax-based-parser
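To make the idea concrete, here is a rough sketch of what the event-driven (SAX-style) approach looks like: instead of building one giant object, the parser emits callbacks and the numbers go straight into a growing typed array. `SaxJsonParser` is a hypothetical interface used only for illustration; substitute whatever streaming JSON parser you end up writing or using.

```javascript
// SAX-style sketch: numbers are appended to a typed array as they are parsed.
// "SaxJsonParser" is hypothetical; only the callback pattern matters here.
var floats = new Float32Array(1024);
var count = 0;

var parser = new SaxJsonParser();                 // hypothetical parser
parser.onNumber = function (value) {
  if (count === floats.length) {                  // grow the backing store
    var bigger = new Float32Array(floats.length * 2);
    bigger.set(floats);
    floats = bigger;
  }
  floats[count++] = value;
};
parser.onEnd = function () {
  startEngine(floats.subarray(0, count));         // hypothetical entry point
};

// Feed the parser chunk by chunk instead of one huge string.
var chunkOfText = '[1.5, 2.25, 3.75]';            // placeholder chunk
parser.write(chunkOfText);
parser.close();
```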

Have you tried encoding it as binary and transferring it as a blob?

https://developer.mozilla.org/en-US/docs/DOM/XMLHttpRequest/Sending_and_Receiving_Binary_Data

http://www.htmlgoodies.com/html5/tutorials/working-with-binary-files-using-the-javascript-filereader-.html#fbid=LLhCrL0KEb6
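If the six buffers are written back to back as raw 32-bit floats, the browser can load them into typed arrays with no JSON parsing at all. A minimal sketch, assuming a made-up layout of a 6-entry Uint32 header (element counts) followed by the float data:

```javascript
// Minimal sketch of loading a raw binary mesh file. The layout (a 6-entry
// Uint32 header with the element count of each buffer, followed by the
// buffers as 32-bit floats) is an assumption, not a standard format.
var xhr = new XMLHttpRequest();
xhr.open('GET', 'mesh.bin', true);
xhr.responseType = 'arraybuffer';                 // no string, no JSON.parse
xhr.onload = function () {
  var buffer = xhr.response;
  var header = new Uint32Array(buffer, 0, 6);     // lengths of the 6 buffers
  var offset = header.byteLength;
  var buffers = [];
  for (var i = 0; i < 6; i++) {
    // Float32Array views share the ArrayBuffer, so no copying happens here.
    buffers.push(new Float32Array(buffer, offset, header[i]));
    offset += header[i] * 4;
  }
  startEngine(buffers);                           // hypothetical entry point
};
xhr.send();
```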

There isn't a really good way of doing that, because the whole file is going to be loaded into memory, and we all know browsers have big memory leaks. Could you instead add some paging for viewing the contents of that file?

Check if there are any plugins that allow you to read the file as a stream; that would improve this greatly.

UPDATE

http://www.html5rocks.com/en/tutorials/file/dndfiles/

You might want to read about the new HTML5 APIs for reading local files. You will still have the issue of downloading 250 MB of data, though.
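For a user-selected local file, the File API mentioned above can hand you an ArrayBuffer directly, skipping both download and text parsing. A minimal sketch, assuming a page with a file input with id `meshFile`:

```javascript
// Minimal sketch of reading a user-selected local file as binary data.
// Assumes an <input type="file" id="meshFile"> element exists on the page,
// and that the file's length is a multiple of 4 bytes (raw float data).
document.getElementById('meshFile').addEventListener('change', function (e) {
  var file = e.target.files[0];
  var reader = new FileReader();
  reader.onload = function () {
    var buffer = reader.result;                   // ArrayBuffer, not a string
    var floats = new Float32Array(buffer);        // zero-copy typed view
    console.log('read', floats.length, 'floats');
  };
  reader.readAsArrayBuffer(file);                 // avoids readAsText + JSON.parse
});
```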

I can think of one solution and one hack.

SOLUTION: extending on the idea of splitting the data into chunks, it boils down to the HTTP protocol. REST rests on the notion that HTTP has enough "language" for most client-server scenarios.

You can set up a request header on the client (Content-len) to establish how much data you need per request.

Then on the backend you have some options (http://httpstatus.es); a rough client-side sketch of the chunked download follows this list:

  • Reply with a 413 if the server is simply unable to get that much data from the DB
  • Reply with a 417 if the server is able to reply, but not within the requested header (Content-len)
  • Reply with a 206 along with the provided chunk, letting the client know "there is more where that came from"
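Here is one way that chunked-download idea could look on the client, using HTTP Range requests. This is a sketch under assumptions not stated in the answer: the server supports Range and answers 206 Partial Content, and the 4 MB chunk size and file name are arbitrary choices for illustration.

```javascript
// Sketch of downloading a large binary file in chunks via HTTP Range requests.
// Assumes the server honours Range headers and replies 206 Partial Content.
var CHUNK = 4 * 1024 * 1024;                      // 4 MB per request (arbitrary)

function fetchChunk(url, start, onDone, parts) {
  parts = parts || [];
  var xhr = new XMLHttpRequest();
  xhr.open('GET', url, true);
  xhr.responseType = 'arraybuffer';
  xhr.setRequestHeader('Range', 'bytes=' + start + '-' + (start + CHUNK - 1));
  xhr.onload = function () {
    if (xhr.status === 206 || xhr.status === 200) {
      parts.push(xhr.response);                   // collect this chunk
    }
    if (xhr.status === 206 && xhr.response.byteLength === CHUNK) {
      fetchChunk(url, start + CHUNK, onDone, parts); // more to come
    } else {
      onDone(parts);                              // last (short) chunk or done
    }
  };
  xhr.send();
}

fetchChunk('mesh.bin', 0, function (parts) {
  console.log('downloaded', parts.length, 'chunks');
});
```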

HACK: use WebSockets to get the binary file, then use the HTML5 File API to load it into memory. This is likely to fail, though, because it's not the download causing the problem but the parsing of an almost-endless JS object.

You're out of luck in the browser. Not only do you have to download the file, but you'll have to parse the JSON regardless. Parse it on the server, break it into smaller chunks, store that data in the DB, and query for what you need.
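One way to do the server-side part of that is a one-off preprocessing script. The sketch below uses Node.js and assumes a JSON layout like { vertices: [...], normals: [...], uvs: [...] }; the field names and output file naming are illustrative assumptions, not part of the answer.

```javascript
// One-off Node.js script: parse the big JSON once on the server and write
// each float array out as raw binary, so the browser never has to JSON.parse.
// The field names and output layout are assumptions for illustration.
var fs = require('fs');

var mesh = JSON.parse(fs.readFileSync('mesh.json', 'utf8'));
var names = ['vertices', 'normals', 'uvs'];       // assumed buffer names

names.forEach(function (name) {
  var floats = new Float32Array(mesh[name]);
  // Wrap the typed array's memory in a Buffer and write it out untouched.
  fs.writeFileSync(name + '.bin', Buffer.from(floats.buffer));
});
```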
