
javascript - Ramda recommendation for removing duplicates from a slightly nested array - Stack Overflow


We're trying to utilise Ramda to avoid some brute-force programming. We have an array of objects that can look like this:

[
{id: "001", failedReason: [1000]},
{id: "001", failedReason: [1001]},
{id: "001", failedReason: [1002]},
{id: "001", failedReason: [1000]},
{id: "001", failedReason: [1000, 1003]},
{id: "002", failedReason: [1000]}
]

and we'd like to transform it so that it looks like this:

[
{id: "001", failedReason: [1000, 1001, 1002, 1003]},
{id: "002", failedReason: [1000]}
]

Essentially it reduces the array based on the id, accumulating a single failedReason array containing all of the failed reasons for that id. We were hoping some Ramda magic might do this, but so far we haven't found a clean way. Any ideas would be appreciated.

Asked Feb 1, 2017 at 21:25 by Gatmando

3 Answers

I can't easily test it on my phone, but something like this should work:

pipe(
  groupBy(prop('id')), 
  map(pluck('failedReason')),
  map(flatten),
  map(uniq)
)
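
For what it's worth, with the Ramda functions pulled off of R, this version returns an object keyed by id rather than the requested array of records; feeding it the sample data should give roughly:

{
  "001": [1000, 1001, 1002, 1003],
  "002": [1000]
}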

Update

I just got around to looking at this on a computer, and noted that the output wasn't quite what you were looking for. Adding two more steps would fix it:

pipe(
  groupBy(prop('id')), 
  map(pluck('failedReason')),
  map(flatten),
  map(uniq),
  toPairs,
  map(zipObj(['id', 'failedReason']))
)

You can see this in action on the Ramda REPL.
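
As a rough sketch of wiring this up outside the REPL (assuming Ramda is loaded as R; the dedupe name here is just illustrative):

const {pipe, groupBy, prop, pluck, map, flatten, uniq, toPairs, zipObj} = R;

const data = [
  {id: '001', failedReason: [1000]},
  {id: '001', failedReason: [1001]},
  {id: '001', failedReason: [1002]},
  {id: '001', failedReason: [1000]},
  {id: '001', failedReason: [1000, 1003]},
  {id: '002', failedReason: [1000]}
];

// group by id, merge and dedupe the reason lists, then rebuild the records
const dedupe = pipe(
  groupBy(prop('id')),
  map(pluck('failedReason')),
  map(flatten),
  map(uniq),
  toPairs,
  map(zipObj(['id', 'failedReason']))
);

dedupe(data);
// => [{id: '001', failedReason: [1000, 1001, 1002, 1003]},
//     {id: '002', failedReason: [1000]}]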

You could define a wrapper type which satisfies the requirements of Monoid. You could then simply use R.concat to combine values of the type:

//  Thing :: { id :: String, failedReason :: Array Number } -> Thing
function Thing(record) {
  if (!(this instanceof Thing)) return new Thing(record);
  this.value = {id: record.id, failedReason: R.uniq(record.failedReason)};
}

//  Thing.id :: Thing -> String
Thing.id = function(thing) {
  return thing.value.id;
};

//  Thing.failedReason :: Thing -> Array Number
Thing.failedReason = function(thing) {
  return thing.value.failedReason;
};

//  Thing.empty :: () -> Thing
Thing.empty = function() {
  return Thing({id: '', failedReason: []});
};

//  Thing#concat :: Thing ~> Thing -> Thing
Thing.prototype.concat = function(other) {
  return Thing({
    id: Thing.id(this) || Thing.id(other),
    failedReason: R.concat(Thing.failedReason(this), Thing.failedReason(other))
  });
};

//  f :: Array { id :: String, failedReason :: Array Number }
//    -> Array { id :: String, failedReason :: Array Number }
var f =
R.pipe(R.map(Thing),
       R.groupBy(Thing.id),
       R.map(R.reduce(R.concat, Thing.empty())),
       R.map(R.prop('value')),
       R.values);

f([
  {id: '001', failedReason: [1000]},
  {id: '001', failedReason: [1001]},
  {id: '001', failedReason: [1002]},
  {id: '001', failedReason: [1000]},
  {id: '001', failedReason: [1000, 1003]},
  {id: '002', failedReason: [1000]}
]);
// => [{"id": "001", "failedReason": [1000, 1001, 1002, 1003]},
//     {"id": "002", "failedReason": [1000]}]

I'm sure you could give the type a better name than Thing. ;)

For fun, and mainly to explore the advantages of Ramda, I tried to come up with a "one liner" to do the same data conversion in plain ES6... I now fully appreciate the simplicity of Scott's answer :D

I thought I'd share my result because it nicely illustrates what a clear API can do in terms of readability. The chain of piped maps, flatten and uniq is so much easier to grasp...

I'm using Map for grouping and Set for filtering duplicate failedReason.

const data = [
  {id: "001", failedReason: [1000]},
  {id: "001", failedReason: [1001]},
  {id: "001", failedReason: [1002]},
  {id: "001", failedReason: [1000]},
  {id: "001", failedReason: [1000, 1003]},
  {id: "002", failedReason: [1000]}
];

const converted = Array.from(data
  .reduce((map, d) => map.set(
      d.id, (map.get(d.id) || []).concat(d.failedReason)
    ), new Map())
  .entries())
  .map(e => ({ id: e[0], failedReason: Array.from(new Set(e[1])) }));
  
console.log(converted);
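
With the sample data above, this should log something close to:

// [ { id: '001', failedReason: [ 1000, 1001, 1002, 1003 ] },
//   { id: '002', failedReason: [ 1000 ] } ]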

If at least the MapIterator and SetIterator would have had a .map or even a .toArray method, the code would have been a bit cleaner.
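
As a side note, Array.from does take a mapping function as an optional second argument, which trims the conversion a little; a rough equivalent of the above (the converted2 name is just for illustration):

const converted2 = Array.from(
  // iterating the Map yields [id, reasons] pairs, which the map function reshapes
  data.reduce(
    (map, d) => map.set(d.id, (map.get(d.id) || []).concat(d.failedReason)),
    new Map()
  ),
  ([id, reasons]) => ({ id, failedReason: [...new Set(reasons)] })
);

console.log(converted2);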
