
postgresql - Stratified sampling using SQL given an absolute sample size - Stack Overflow


I have the following population:

a
b
b
c
c
c
c

I am looking for a SQL statement to generate a stratified sample of arbitrary size. Let's say for this example, I would like a sample size of 4. I would expect the output to be:

a
b
c
c

asked Feb 4 at 5:08 by Saqib Ali, edited Feb 7 at 11:52 by Zegarek
  • Please clarify what rules you want to apply to get a stratified sample. With 1000 x A, 1000 x B, and 2 x C and a sample size of 6, what result do you expect? Dismiss C completely, because its proportion is too small to be considered, thus ending up with AAABBB? Have each stratum at least once in the result and then fill up proportionally, thus getting either AAABBC or AABBBC? Get as many rows per stratum as possible, thus getting AABBCC? Please be very precise formulating the rules, considering such edge cases. – Thorsten Kettner Commented Feb 4 at 7:41
  • Interesting problem, though there can be many ways to cope with corner cases, and a precise specification is your (rather than our) task, Saqib. Tim's count trick is fine and simple. Personally I think some election algorithm such as the d'Hondt method could be applied too (a sketch follows these comments). – Tomáš Záluský Commented Feb 4 at 9:24
  • This is an algorithm question. IMHO it's a better fit for softwareengineering.stackexchange – Jan Doggen Commented Feb 7 at 12:55
  • @JanDoggen From "Which programming Stack Exchange sites do I post on?": "Software Engineering: if your question is directly related to the Systems Development Life Cycle (except for troubleshooting, writing or explaining specific code), you can ask it on Software Engineering" – this does sound like a question about writing code. The threads over there don't seem to discuss much code. – Zegarek Commented Feb 7 at 13:25
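For what it's worth, a purely hypothetical sketch of the d'Hondt idea from the comment above (not taken from any of the answers below), assuming a population(stratum) table: each stratum's row count is divided by 1..k, and the k largest quotients each win one slot in the sample.

with counts as (
    select stratum, count(*) as cnt
    from population
    group by stratum
)
select stratum
from counts
cross join generate_series(1, 4) as d(divisor)  -- divisors 1..k for a sample of k = 4
order by cnt / d.divisor::numeric desc          -- the k largest quotients win slots
limit 4;

With the question's data this allocates c:3, b:1 and drops a entirely; d'Hondt favours larger strata, which is exactly the kind of allocation rule the comments say needs pinning down first.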

3 Answers

select *
from population
order by row_number() over (partition by stratum)
limit 4
offset 0;
stratum
-------
c
b
a
c

demo at db<>fiddle

  1. Establish member numbers within each stratum using row_number().
  2. ORDER BY that.
  3. Use LIMIT to cut off your sample.
  4. Increase OFFSET to progress through samples.

You can use different pagination methods to progress through consecutive, non-overlapping samples of your population. LIMIT..OFFSET isn't the best, but it's the simplest.
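A minimal sketch of advancing to the next, non-overlapping sample of 4 (the extra stratum tiebreaker keeps the ordering deterministic between pages):

select *
from population
order by row_number() over (partition by stratum), stratum
limit 4
offset 4;  -- page 2: members 5..8 of the same ordering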

Once it has sampled one member from each group, it picks another member in whatever order Postgres finds quickest. If you instead want to force it to pick members alphabetically (getting b rather than c as the fourth member drafted into this sample), add another ORDER BY expression accordingly, as shown in the demo.

To later order the whole extracted sample, you can wrap it in a subquery or a CTE and add another ORDER BY outside, so that it sorts the result without affecting how members are sampled.
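A sketch of that wrapping, assuming the same population table as above:

with sample as (
  select stratum
  from population
  order by row_number() over (partition by stratum), stratum
  limit 4
)
select stratum
from sample
order by stratum;  -- sorts the extracted sample without changing which rows were picked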


There are also built-in random sampling methods you can specify with the TABLESAMPLE clause:

select *
from population
tablesample system(50) repeatable(.42)
limit 4;

But those don't operate on data-level strata.

  • TABLESAMPLE SYSTEM uses pages. 50 means every page of the table has a 50% chance of being drafted. The number of live records on a page isn't constant, and this typically gets you neighbouring rows that were inserted together/consecutively. You need to know the total row count of the table and adjust the percentage to it in order to arrive at a specific sample size. You also still need a LIMIT clause on top, because the exact sample size you get is based entirely on probability.
  • TABLESAMPLE BERNOULLI uses records. With 50, every record of every page has a 50% chance. Again, it needs to be combined with the total row count and trimmed with LIMIT to arrive at a specific sample size.
  • TABLESAMPLE SYSTEM_TIME from tsm_system_time is TABLESAMPLE SYSTEM, but instead of accepting a target sample percentage, it takes a time limit. It just keeps drafting until it runs out of time.
  • TABLESAMPLE SYSTEM_ROWS from tsm_system_rows is like TABLESAMPLE SYSTEM with LIMIT applied during sampling: it drafts page by page until it collects the target sample size (a sketch follows this list).
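A minimal sketch of the last one, assuming the tsm_system_rows extension is available to install:

create extension if not exists tsm_system_rows;

select *
from population
tablesample system_rows(4);  -- draws exactly 4 rows (fewer only if the table is smaller)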

We can use a count trick here, with the help of window functions:

WITH cte AS (
    SELECT t.*, COUNT(*) OVER (PARTITION BY col1) cnt,
                ROW_NUMBER() OVER (PARTITION BY col1 ORDER BY col1) rn
    FROM yourTable t
)

SELECT col1
FROM cte
WHERE 1.0*rn/cnt <= (4.0 / (SELECT COUNT(*) FROM yourTable))
ORDER BY col1;

The idea is to sequentially number every value within its stratum and then retain from each stratum only the fraction that the target sample size represents of the whole population.
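As the comments point out, the rounding rule decides the edge cases: under this cutoff a single-member stratum has rn/cnt = 1, which exceeds 4/7, so a is dropped from the sample. A hedged variant (not part of the original answer) that rounds each stratum's proportional share instead happens to produce a, b, c, c for this data, though rounded shares are not guaranteed to sum exactly to the target size in general:

WITH cte AS (
    SELECT t.*,
           COUNT(*) OVER (PARTITION BY col1) AS cnt,
           ROW_NUMBER() OVER (PARTITION BY col1 ORDER BY col1) AS rn,
           COUNT(*) OVER () AS total
    FROM yourTable t
)
SELECT col1
FROM cte
WHERE rn <= round(cnt * 4.0 / total)  -- each stratum keeps its rounded share of the 4-row sample
ORDER BY col1;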

You can use the NTILE window function to define the number of buckets (or tiles) you want, then use ROW_NUMBER() to identify the first row of each bucket, and filter on that:

select col
from (
  select col, tile, row_number() over(partition by tile order by col) as rownr
  from (
    select col, ntile(4) over (order by col) as tile
    from (values ('a'), ('b'), ('b'), ('c'), ('c'), ('c'), ('c')) as a(col)
  ) b
) c
where rownr = 1
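With the seven sample values, ntile(4) assigns the buckets (a,b), (b,c), (c,c) and (c); taking the first row of each yields a, b, c, c, matching the expected output.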

See also this dbfiddle.

For this specific example, you can also use the MIN aggregate function instead of ROW_NUMBER():

select min(col) as col
from (
  select col, ntile(4) over (order by col) as tile
  from (values ('a'), ('b'), ('b'), ('c'), ('c'), ('c'), ('c')) as a(col)
) b
group by tile
order by 1

See also this dbfiddle.

However, the first solution is in my opinion more generally useful.
