
Checking for duplicate Javascript objects - Stack Overflow


TL;DR version: I want to avoid adding duplicate Javascript objects to an array of similar objects, some of which might be really big. What's the best approach?

I have an application where I'm loading large amounts of JSON data into a Javascript data structure. While it's a bit more complex than this, assume that I'm loading JSON into an array of Javascript objects from a server through a series of AJAX requests, something like:

var myObjects = [];

function processObject(o) {
    myObjects.push(o);
}

for (var x=0; x<1000; x++) {
    $.getJSON('/new_object.json', processObject);
}

To complicate matters, the JSON:

  • is in an unknown schema
  • is of arbitrary length (probably not enormous, but could be in the 100-200 kb range)
  • might contain duplicates across different requests

My initial thought is to have an additional object to store a hash of each object (via JSON.stringify?) and check against it on each load, like this:

var myHashMap = {};

function processObject(o) {
    var hash = JSON.stringify(o);
    // is it in the hashmap?
    if (!(myHashMap[hash])) {
        myObjects.push(o);
        // set the hashmap key for future checks
        myHashMap[hash] = true;
    }
    // else ignore this object
}

but I'm worried about having property names in myHashMap that might be 200 kb in length. So my questions are:

  • Is there a better approach for this problem than the hashmap idea?
  • If not, is there a better way to make a hash function for a JSON object of arbitrary length and schema than JSON.stringify?
  • What are the possible issues with super-long property names in an object?

asked Jul 6, 2011 at 21:30 by nrabinowitz
  • do you control the server? any way to add a unique ID to your objects that you can key off of? – NG. Commented Jul 6, 2011 at 21:36
  • I agree with SB. Some sort of unique key per object would make this trivial. Can the problem be rethought at the origin of the data to create such a key? If that can't be easily done, can you identify a small subset of properties of the object that uniquely identify it, such that if they are the same, you can consider the objects the same and make your hash out of just that subset of properties? – jfriend00 Commented Jul 6, 2011 at 21:38
  • @SB, @jfriend00 - a unique ID would make this much easier, but for various reasons it's not feasible. Assume I don't control the server and that the schema of the object is entirely black-boxed (again, it's a little more complicated, but this is essentially the case). – nrabinowitz Commented Jul 6, 2011 at 21:49

2 Answers


I'd suggest you create an MD5 hash of JSON.stringify(o) and store that in your hashmap, with a reference to your stored object as the data for that hash key. And to make sure that there are no object key-order differences in the JSON.stringify() output, you have to create a copy of the object that orders the keys.

Then, when each new object comes in, you check it against the hash map. If you find a match in the hash map, then you compare the incoming object with the actual object that you've stored to see if they are truly duplicates (since there can be MD5 hash collisions). That way, you have a manageable hash table (with only MD5 hashes in it).

Here's code to create a canonical string representation of an object (including nested objects or objects within arrays) that handles object keys that might be in a different order if you just called JSON.stringify().

// Code to do a canonical JSON.stringify() that puts object properties 
// in a consistent order
// Does not allow circular references (child containing reference to parent)
JSON.stringifyCanonical = function(obj) {
    // compatible with either browser or node.js
    var Set = typeof window === "object" ? window.Set : global.Set;

    // poor man's Set polyfill
    if (typeof Set !== "function") {
        Set = function(s) {
            if (s) {
                this.data = s.data.slice();
            } else {
                this.data = [];
            }
        };
        Set.prototype = {
            add: function(item) {
                this.data.push(item);
            },
            has: function(item) {
                return this.data.indexOf(item) !== -1;
            }
        };
    }

    function orderKeys(obj, parents) {
        if (typeof obj !== "object") {
            throw new Error("orderKeys() expects object type");
        }
        var set = new Set(parents);
        if (set.has(obj)) {
            throw new Error("circular object in stringifyCanonical()");
        }
        set.add(obj);
        var tempObj, item, i;
        if (Array.isArray(obj)) {
            // no need to re-order an array
            // but need to check it for embedded objects that need to be ordered
            tempObj = [];
            for (i = 0; i < obj.length; i++) {
                item = obj[i];
                // typeof null is also "object", but null needs no re-ordering
                if (item !== null && typeof item === "object") {
                    tempObj[i] = orderKeys(item, set);
                } else {
                    tempObj[i] = item;
                }
            }
        } else {
            tempObj = {};
            // get keys, sort them and build new object
            Object.keys(obj).sort().forEach(function(item) {
                // again, skip null values (typeof null === "object")
                if (obj[item] !== null && typeof obj[item] === "object") {
                    tempObj[item] = orderKeys(obj[item], set);
                } else {
                    tempObj[item] = obj[item];
                }
            });
        }
        return tempObj;
    }

    return JSON.stringify(orderKeys(obj));
}
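
For example, two objects with the same contents but a different key order now produce the same canonical string (a quick illustration, assuming the function above has been defined):

var a = { foo: 1, bar: { baz: 2, qux: 3 } };
var b = { bar: { qux: 3, baz: 2 }, foo: 1 };

console.log(JSON.stringify(a) === JSON.stringify(b));                   // false - key order differs
console.log(JSON.stringifyCanonical(a) === JSON.stringifyCanonical(b)); // true  - canonical form matches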

And here's the algorithm:

var myHashMap = {};

function processObject(o) {
    var stringifiedCandidate = JSON.stringifyCanonical(o);
    var hash = CreateMD5(stringifiedCandidate);
    var list = [], found = false;
    // is it in the hashmap?
    if (!myHashMap[hash]) {
        // not in the hash table, so it's a unique object
        myObjects.push(o);
        list.push(myObjects.length - 1);    // put a reference to the object with this hash value in the list
        myHashMap[hash] = list;             // store the list in the hash table for future comparisons
    } else {
        // the hash does exist in the hash table, check for an exact object match to see if it's really a duplicate
        list = myHashMap[hash];             // get the list of other object indexes with this hash value
        // loop through the list
        for (var i = 0; i < list.length; i++) {
            if (stringifiedCandidate === JSON.stringifyCanonical(myObjects[list[i]])) {
                found = true;       // found an exact object match
                break;
            }
        }
        // if not found, it's not an exact duplicate, even though there was a hash match
        if (!found) {
            myObjects.push(o);
            myHashMap[hash].push(myObjects.length - 1);
        }
    }
}
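
Note that CreateMD5() above is a placeholder for whatever MD5 implementation you pull in (browsers don't provide one natively, so you'd typically use a small library such as SparkMD5). As a rough stand-in for experimenting, any short string digest will do; here's a hypothetical one based on a simple djb2-style hash. It is not MD5 and collides more often, but the exact-match check above already tolerates collisions:

// Hypothetical stand-in for CreateMD5(): a simple djb2-style string hash.
// Not cryptographic and more collision-prone than MD5, but the algorithm
// above compares full canonical strings on a hash match, so collisions are safe.
function CreateMD5(str) {
    var hash = 5381;
    for (var i = 0; i < str.length; i++) {
        hash = ((hash * 33) + str.charCodeAt(i)) | 0;   // keep it in 32-bit integer range
    }
    return (hash >>> 0).toString(16);                   // unsigned hex string, e.g. "9f3c2a10"
}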

Test case for JSON.stringifyCanonical() is here: https://jsfiddle/jfriend00/zfrtpqcL/

  1. Maybe. For example, if you knew what kind of objects were coming through, you could write a better indexing and searching system than plain JS object keys. But you would only be writing it in JavaScript, whereas object keys are implemented in C...
  2. Does your hashing need to be lossless or not? If it can be lossy, try lossy compression such as MD5. I'm guessing you will lose some speed and gain some memory. By the way, does JSON.stringify(o) guarantee the same key ordering? Because {foo: 1, bar: 2} and {bar: 2, foo: 1} are equal as objects, but not as strings (see the snippet below).
  3. They cost memory.
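
A quick illustration of the key-ordering point in item 2 (nothing here is specific to the question, just plain JSON.stringify()):

var a = JSON.stringify({ foo: 1, bar: 2 });   // '{"foo":1,"bar":2}'
var b = JSON.stringify({ bar: 2, foo: 1 });   // '{"bar":2,"foo":1}'
console.log(a === b);                         // false, even though the two objects are equivalent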

One possible optimization:

Instead of using getJSON, use $.get and pass "text" as the dataType param. Then you can use the raw result string as your hash key and convert it to an object afterwards.
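
A minimal sketch of that optimization, assuming the same /new_object.json endpoint as the question and using the raw response text as the dedupe key (with the caveat from item 2 that key order still matters):

var myObjects = [];
var seen = {};

function processText(text) {
    // use the raw response string as the key...
    if (!seen[text]) {
        seen[text] = true;
        // ...and only pay the parsing cost for objects we haven't seen
        myObjects.push(JSON.parse(text));
    }
}

for (var x = 0; x < 1000; x++) {
    $.get('/new_object.json', processText, "text");
}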

Actually, while writing that last sentence I thought of another solution:

  • Collect all results with $.get into an array
  • Sort it with the built-in (C-speed) Array.sort
  • Now you can easily spot and remove duplicates with one for loop

Again, though: different JSON strings can represent the same JavaScript object.
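
A rough sketch of that sort-and-dedupe idea, assuming all the raw response strings have already been collected (and again subject to the key-ordering caveat):

// texts: array of raw JSON strings collected from $.get(..., "text") callbacks
function dedupeTexts(texts) {
    texts = texts.slice().sort();        // built-in sort groups identical strings together
    var objects = [];
    for (var i = 0; i < texts.length; i++) {
        if (i === 0 || texts[i] !== texts[i - 1]) {
            objects.push(JSON.parse(texts[i]));
        }
    }
    return objects;
}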
