
javascript - Table row sorting & string performance - Stack Overflow


I've got a table with a large number of rows that is not suitable for paging. The rows in this table can be sorted by clicking a column header, which triggers a client-side sorting algorithm based on http://www.exforsys./tutorials/jquery/jquery-basic-alphabetical-sorting.html The function dynamically adds an "expando" property to each row, thereby caching the key pre-sort:

row.sortKey = $(row).children('td').eq(column).text().toUpperCase();

As you can see, the property values are simply set to the contents of the column that was clicked and they are discarded (nulled) once the sorting has finished. Performance is actually surprisingly good - but columns that contain more text appear to be slower to sort.

As the sorting is only done to make it easier for the user to find the row(s) they are looking for, I figured things could be sped up by cropping the key values with substr(0, 7) or something (seven chars should provide more than enough precision). However, I found that substr() incurred more performance cost than it saved, and if anything it made sorting slower.
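A plausible explanation: the key is built once per row, but the comparator runs O(n log n) times, and a string comparison already stops at the first differing character, so truncation mostly adds substr() calls without making individual comparisons cheaper. A minimal sketch (made-up data, no DOM, plain JavaScript) contrasting the two key strategies:

```javascript
// Generate fake row keys (assumption: random alphanumeric strings,
// standing in for the column text of each row).
var keys = [];
for (var i = 0; i < 1000; i++) {
    keys.push(Math.random().toString(36).slice(2, 12).toUpperCase());
}

// Strategy 1: sort the full keys. String comparison short-circuits at
// the first differing character, so long keys rarely cost much more.
var full = keys.slice().sort();

// Strategy 2: crop every key first. This adds one substr() call and one
// extra string allocation per row before any comparison happens.
var cropped = keys.map(function(k) { return k.substr(0, 7); }).sort();
```

Timing both variants on real table data would show whether the per-row substr() overhead outweighs any comparison savings; on short cell text it usually does.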

Does anyone know any (other) optimisations that can be applied to this method?

Here is a more complete example:

var rows = $table.find('tbody > tr').get();
$.each(rows, function(index, row) {
    row.sortKey = $(row).children('td').eq(column).text().toUpperCase();
});
rows.sort(function(a, b) {
    if (a.sortKey < b.sortKey) return -1;
    if (a.sortKey > b.sortKey) return 1;
    return 0;
});
$.each(rows, function(index, row) {
    $table.children('tbody').append(row);
    row.sortKey = null;
});

EDIT: Here is the final version of my code, incorporating many of the optimisations provided in the answers below:

$('table.sortable').each(function() {
    var $table = $(this);
    var rows = $table.find('tbody > tr').get();
    $('th', $table).each(function() {
        $(this).click(function() {
            var colIndex = this.cellIndex;
            var i, storage = [];  // fresh array per click, so rows don't accumulate across sorts
            for (i = 0; i < rows.length; i++) {
                rows[i].sortKey = $(rows[i].childNodes[colIndex]).text().toUpperCase();
            }
            rows.sort(function(a, b) {
                if (a.sortKey < b.sortKey) return -1;
                if (a.sortKey > b.sortKey) return 1;
                return 0;
            });
            for (i = 0; i < rows.length; i++) {
                storage.push(rows[i]);
                rows[i].sortKey = null;
            }
            $table.children('tbody').append(storage);
        });
    });
});

Asked Apr 8, 2009 by Ola Tuvesson; edited Apr 9, 2009.

6 Answers


There are several problems with the example you gave. The biggest one is that you are selecting the columns using jQuery inside the loops, which carries a major performance penalty. If you have any control over the HTML, I would suggest using plain DOM methods to get the column you want to sort on. Note that where you might expect a table cell node, you may sometimes get a text node instead; I'll come back to that later. A for loop is faster than $.each, so you may consider using that instead, but I suggest you benchmark it.

I took your example and created a table with 1000 rows. It took about 750ms on my machine to sort it. I did some optimizations (see code below) and managed to get it down to 200ms. The sorting itself took around 20ms (not bad).

var sb = [];
sb.push("<table border='1'>");
var x;
for (var i = 0; i < 1000; i++) {
    x = Math.floor(Math.random() * 1000);
    sb.push("<tr><td>data");
    sb.push(x);
    sb.push("</td></tr>");
}
sb.push("</table>");

document.write(sb.join(""));

var $table = $("table");
var rows = $table.find('tbody > tr').get();
var columnIndex = 0;

var t = new Date();

// Build each sort key once, with direct childNodes access instead of a jQuery selector.
$.each(rows, function(index, row) {
    row.sortKey = $(row.childNodes[columnIndex]).text();
});
alert("Sort key: " + (new Date() - t) + "ms");
t = new Date();

rows.sort(function(a, b) {
    return a.sortKey.localeCompare(b.sortKey);
});
alert("Sort: " + (new Date() - t) + "ms");
t = new Date();

// Look up the tbody once, outside the loop.
var tbody = $table.children('tbody').get(0);

$.each(rows, function(index, row) {
    tbody.appendChild(row);
    delete row.sortKey;
});

alert("Table: " + (new Date() - t) + "ms");

When you write for speed you want each iteration to be as quick as possible, so don't do anything inside the loops that you can do outside of them. For example, moving $table.children('tbody').get(0) out of the last loop sped things up enormously.

As for using DOM methods to access a column: what you need is the column index, so you could iterate over the th elements until you find the correct one (provided the HTML formatting is identical for th and td tags). You can then use that index to get the correct child node of each row.

Also, if the table is static and the users are likely to sort it repeatedly, you should cache the rows and not delete the sortKey property; that saves about 30% of the sort time. There is also the matter of the table contents: if the content is text, this sorting method is fine, but if it contains numbers you need to account for that, since I am using localeCompare, which is a method of the String type.
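To illustrate the numbers point: a plain string compare puts "10" before "2". One possible comparator (a sketch, not part of the original answer) falls back to numeric comparison when both keys parse as numbers:

```javascript
// Hypothetical comparator: numeric when both keys parse as numbers,
// locale-aware string comparison otherwise.
function compareKeys(a, b) {
    var na = parseFloat(a), nb = parseFloat(b);
    if (!isNaN(na) && !isNaN(nb)) {
        return na - nb;            // numeric columns: "2" sorts before "10"
    }
    return a.localeCompare(b);     // text columns: locale-aware order
}

var mixed = ["10", "2", "1"];
mixed.sort(compareKeys);           // → ["1", "2", "10"]
```

You would pick the comparator per column (or sniff the first key) rather than parseFloat-ing text columns on every comparison.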

One optimisation that I can think of is to modify this code:

$.each(rows, function(index, row) {
    $table.children('tbody').append(row);
    row.sortKey = null;
});

so that instead of appending one row at a time, you append larger chunks, or everything at once if possible. To do this, first build a single string containing all the rows, then append it in one go.

Use Array.push and Array.join to concatenate the string.
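The push/join idea can be seen without the DOM: rather than growing a string with += (which reallocates the accumulated string each time), collect the fragments in an array and join once at the end. A small sketch with invented data (the final $tbody.append(html) call is assumed jQuery usage, shown only as a comment):

```javascript
// Build markup for many rows by pushing fragments and joining once.
var parts = [];
for (var i = 0; i < 3; i++) {
    parts.push("<tr><td>row ");
    parts.push(i);            // numbers are coerced to strings by join()
    parts.push("</td></tr>");
}
var html = parts.join("");
// html is now "<tr><td>row 0</td></tr><tr><td>row 1</td></tr><tr><td>row 2</td></tr>"
// A single $tbody.append(html) would then insert everything in one operation.
```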

Another performance improvement: don't use $.each where you can help it; use a regular for loop instead. Regular loops are slightly faster.

Also, this blog post may be useful, if not now then in the future:

http://notetodogself.blogspot./2009/02/building-fast-jquery-plugins.html

Following mkoryak's suggestion I have now modified the code to assemble the rows in an array and then append them all in one go:

$.each(rows, function(index, row) {
    storage.push(row);
    row.sortKey = null;
});
$table.children('tbody').append(storage);

This seems to have improved performance by about 25%: sorting 1500 rows now takes ~3s, as opposed to ~4s previously.

I was using the DataTables jQuery plugin for this kind of work, having formerly used the YUI DataTable component. The 1.5 beta version of the jQuery DataTables component supports either instantiating from HTML or using an AJAX data source. It's pretty straightforward to set up, and it was very fast at sifting and sorting data.

I found that once I hit about 1600 lines of my rather verbose data, the only way to go was server-side data loading.

Hi Ola!

This might not even be related to your question, but how many rows are we talking about? For a very large number of rows (tens of thousands) tables are horribly slow. A more lightweight solution would be to use divs and CSS to simulate tables. It might not be semantically correct (ha! tables and semantics in the same phrase, who would have thought?), but it's MUCH faster.
