
javascript - "virtual" dom manipulation? - Stack Overflow


I know that doing multiple dom manipulations is bad as it forces multiple repaints.

I.e:

$('body').append('<div />')
         .append('<div />')
         .append('<div />')
         .append('<div />');

Instead, a better practice is apparently:

$('body').append('<div><div></div><div></div><div></div><div></div></div>');

But I am curious about "virtual" manipulation.

I.e:

$('<div />').append('<div />')
            .append('<div />')
            .append('<div />')
            .append('<div />')
            .appendTo('body');

Is it still bad? Obviously there will be some overhead from calling a function several times, but is there going to be any severe performance hit?


Reason I am asking is this:

var divs = [
    {text: 'First',  id: 'div_1', style: 'background-color: #f00;'},
    {text: 'Second', id: 'div_2', style: 'background-color: #0f0;'},
    {text: 'Third',  id: 'div_3', style: 'background-color: #00f;'},
    {text: 'Fourth', id: 'div_4', style: 'background-color: #f00;'},
    {text: 'Fifth',  id: 'div_5', style: 'background-color: #0f0;'},
    {text: 'Sixth',  id: 'div_6', style: 'background-color: #00f;'}
];

var element = $('<div />');

$.each(divs, function(i,o){
    element.append($('<div />', o));
});

$('body').append(element);

Imagine that the divs array has come from a database table describing a form (OK, I'm using divs in the example, but they could easily be replaced with inputs) or something similar.

Or, with the "recommended" version, we have:

var divs = [
    {text: 'First',  id: 'div_1', style: 'background-color: #f00;'},
    {text: 'Second', id: 'div_2', style: 'background-color: #0f0;'},
    {text: 'Third',  id: 'div_3', style: 'background-color: #00f;'},
    {text: 'Fourth', id: 'div_4', style: 'background-color: #f00;'},
    {text: 'Fifth',  id: 'div_5', style: 'background-color: #0f0;'},
    {text: 'Sixth',  id: 'div_6', style: 'background-color: #00f;'}
];

var element = '<div>';

$.each(divs, function(i,o){
    element += '<div ';

    $.each(o, function(k,v){
        if(k != 'text'){
            element += k+'="'+v+'" ';
        }            
    });

    element += '>'+o.text+'</div>';

});

element += '</div>';

$('body').append(element);
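As an aside, this string-concatenation loop can be tightened up (and made safer against markup injection) by escaping attribute values and joining an array of fragments. This is only an illustrative sketch; `escapeHtml` and `buildDivs` are hypothetical helpers, not part of the original post:

```javascript
// Hypothetical helper: minimal escaping for values placed into HTML/attributes.
function escapeHtml(value) {
    return String(value)
        .replace(/&/g, '&amp;')
        .replace(/</g, '&lt;')
        .replace(/>/g, '&gt;')
        .replace(/"/g, '&quot;');
}

// Build the same markup as the loop above, but via an array join.
function buildDivs(divs) {
    var parts = divs.map(function (o) {
        return '<div id="' + escapeHtml(o.id) + '" style="' + escapeHtml(o.style) + '">' +
               escapeHtml(o.text) + '</div>';
    });
    return '<div>' + parts.join('') + '</div>';
}
```

The page would then do a single append of a single string: `$('body').append(buildDivs(divs));`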

Asked Nov 2, 2012 by Hailwood; edited Nov 2, 2012.
  • @nathan hayfield: "seems like its still bad" --- why so? "why not just use" --- and why not just keep it as it is? – zerkms Commented Nov 2, 2012 at 23:37
  • @nathan hayfield: "much faster"??? Can you tell the REAL difference for 5 append calls vs concatenation? The code is written for people, and it's optimized only if it doesn't fit performance requirements. – zerkms Commented Nov 2, 2012 at 23:39
  • 2 Repaints are probably deferred until after the event you're handling is complete anyway. – millimoose Commented Nov 2, 2012 at 23:43
  • @millimoose Unless you make a call that requires the engine to calculate the dimensions of the node you modified, then it will get re-rendered – Ruan Mendes Commented Nov 2, 2012 at 23:45

4 Answers


Firstly, although it is great to read about potential performance hits like this, you should always start by measuring to see whether you even have a problem.

If you cannot perceive a problem, write the most readable code.

If you can perceive a problem, measure, change and measure.
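In that spirit, a rough way to measure is to time a batch of operations and compare the two candidates. A minimal sketch (`timeIt` is a made-up helper; in a browser you could equally use `console.time` or `performance.now()`):

```javascript
// Made-up helper: run fn `n` times and return the elapsed milliseconds.
function timeIt(fn, n) {
    var start = Date.now();
    for (var i = 0; i < n; i++) {
        fn();
    }
    return Date.now() - start;
}

// Example: compare two string-building strategies over many iterations.
var concatMs = timeIt(function () {
    var s = '';
    for (var i = 0; i < 100; i++) { s += '<div></div>'; }
}, 1000);

var joinMs = timeIt(function () {
    var parts = [];
    for (var i = 0; i < 100; i++) { parts.push('<div></div>'); }
    parts.join('');
}, 1000);
```

Only if a measured difference like this actually matters to your users is it worth trading readability for speed.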

Having said all this, the last example you have posted involves elements that are not yet written to the DOM, so there would be no repaint until the appendTo adds the elements to the DOM.

I would be very surprised if you could capture a difference in speed between the second and third examples - and quite surprised if you could see any major difference between any of them.

If you're really worried about performance when appending nodes, you need to use document fragments. These let you build up elements off-DOM and attach them in a single operation, without a repaint per element. John Resig has an excellent article on this topic. He notes a 200-300% increase in performance. I implemented document fragments in one of my apps and can confirm his claim.
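For reference, the DocumentFragment pattern looks roughly like this in plain DOM code. This is a sketch, not the answerer's code; the `doc` parameter exists only so the function can run outside a browser - in a page you would simply call `renderDivs(divs)`:

```javascript
// Sketch of the DocumentFragment pattern: build all nodes off-DOM,
// then attach them with a single appendChild, causing one reflow.
// `doc` defaults to the page's document; it is injectable only to
// keep the sketch testable outside a browser.
function renderDivs(divs, doc) {
    doc = doc || document;
    var fragment = doc.createDocumentFragment();
    divs.forEach(function (o) {
        var div = doc.createElement('div');
        div.setAttribute('id', o.id);
        div.setAttribute('style', o.style);
        div.textContent = o.text;
        fragment.appendChild(div);
    });
    return fragment; // caller does: container.appendChild(fragment)
}
```

In a page: `document.body.appendChild(renderDivs(divs));` - one insertion, regardless of how many rows the fragment holds.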

Whatever happened to good old markup generation at runtime? Seriously, what happened?

I agree with @Sohnee's point about the importance of readability, but DOM manipulations are some of the most expensive operations a browser can perform. The option of maintaining a string of markup can be made perfectly readable and offers a user-experience improvement that is beyond negligible.

In this jsperf, we're creating a 10x100 table at runtime - a perfectly reasonable case (and not the most complex scenario by far) for data pagination. On a quad-core machine running a recent version of Chrome, the direct DOM manipulation script takes 60ms to complete, as opposed to 3ms for markup caching.

This is an indistinguishable difference on my setup, but what about the poor number-crunching folk sitting behind a corporate firewall and still forced to use an obsolete version of IE? What if the DOM operations required were to be heavier, with attribute manipulation and aggressively forcing re-paints/re-flows?

All I'm saying is, if you ever want to optimize some JavaScript, this is not a bad place to start.

I think that if you start getting to the point where this kind of code is a performance bottleneck, you should be using templates instead of DOM manipulation.

But yes, using the document fragment approach (putting all nodes into a detached node before attaching to the DOM) is going to be faster in most cases, almost never slower.

  • http://mustache.github.com/
  • https://developers.google.com/closure/templates/docs/helloworld_js
  • https://developer.mozilla.org/en-US/docs/JavaScript_templates
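To illustrate the template idea without pulling in any of the libraries above, the same list can be rendered from a tiny placeholder-replacing function. This is a toy sketch; real engines like those linked add escaping, sections, and caching that this version lacks:

```javascript
// Toy template: replace {{key}} placeholders with values from `data`.
function render(template, data) {
    return template.replace(/\{\{(\w+)\}\}/g, function (match, key) {
        return data[key] != null ? data[key] : '';
    });
}

var rowTemplate = '<div id="{{id}}" style="{{style}}">{{text}}</div>';

// Render the whole list from the template in one pass.
function renderList(divs) {
    return '<div>' + divs.map(function (o) {
        return render(rowTemplate, o);
    }).join('') + '</div>';
}
```

The data stays in the `divs` array, the markup lives in one template string, and the page does a single `$('body').append(renderList(divs));`.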