
I have a dataframe whose records are "duplicated" in every column except one, called source. I match these records one to one per source into groups. Example data for such a dataframe:

id,str_id,partition_number,source,type,state,quantity,price,m_group,m_status
1,s1_1,111,1,A,1,10,100.0,,0
2,s1_2,111,1,A,1,10,100.0,,0
3,s1_3,222,1,B,2,20,150.0,,0
4,s1_4,333,1,C,1,30,200.0,,0
5,s1_5,111,1,A,1,10,100.0,,0
6,s1_6,111,1,A,1,10,100.0,,0
7,s2_1,111,5,A,1,10,100.0,,0
8,s2_2,111,5,A,1,10,100.0,,0
9,s2_3,111,5,A,1,10,100.0,,0
10,s2_4,222,5,B,2,20,150.0,,0
11,s2_5,444,5,D,1,40,250.0,,0
12,s3_1,111,6,A,1,10,100.0,,0
13,s3_2,111,6,A,1,10,100.0,,0
14,s3_3,111,6,A,1,10,100.0,,0
15,s3_4,222,6,B,2,20,150.0,,0
16,s3_5,444,6,D,1,40,250.0,,0
17,s3_6,333,6,C,1,30,200.0,,0

Loaded into dataframe:

┌─────┬────────┬──────────────────┬────────┬──────┬───────┬──────────┬───────┬─────────┬──────────┐
│ id  ┆ str_id ┆ partition_number ┆ source ┆ type ┆ state ┆ quantity ┆ price ┆ m_group ┆ m_status │
│ --- ┆ ---    ┆ ---              ┆ ---    ┆ ---  ┆ ---   ┆ ---      ┆ ---   ┆ ---     ┆ ---      │
│ i64 ┆ str    ┆ str              ┆ i64    ┆ str  ┆ i64   ┆ i64      ┆ f64   ┆ list    ┆ list     │
╞═════╪════════╪══════════════════╪════════╪══════╪═══════╪══════════╪═══════╪═════════╪══════════╡
│ 1   ┆ s1_1   ┆ 111              ┆ 1      ┆ A    ┆ 1     ┆ 10       ┆ 100.0 ┆ []      ┆ []       │
│ 2   ┆ s1_2   ┆ 111              ┆ 1      ┆ A    ┆ 1     ┆ 10       ┆ 100.0 ┆ []      ┆ []       │
│ 3   ┆ s1_3   ┆ 222              ┆ 1      ┆ B    ┆ 2     ┆ 20       ┆ 150.0 ┆ []      ┆ []       │
│ 4   ┆ s1_4   ┆ 333              ┆ 1      ┆ C    ┆ 1     ┆ 30       ┆ 200.0 ┆ []      ┆ []       │
│ 5   ┆ s1_5   ┆ 111              ┆ 1      ┆ A    ┆ 1     ┆ 10       ┆ 100.0 ┆ []      ┆ []       │
│ 6   ┆ s1_6   ┆ 111              ┆ 1      ┆ A    ┆ 1     ┆ 10       ┆ 100.0 ┆ []      ┆ []       │
│ 7   ┆ s2_1   ┆ 111              ┆ 5      ┆ A    ┆ 1     ┆ 10       ┆ 100.0 ┆ []      ┆ []       │
│ 8   ┆ s2_2   ┆ 111              ┆ 5      ┆ A    ┆ 1     ┆ 10       ┆ 100.0 ┆ []      ┆ []       │
│ 9   ┆ s2_3   ┆ 111              ┆ 5      ┆ A    ┆ 1     ┆ 10       ┆ 100.0 ┆ []      ┆ []       │
│ 10  ┆ s2_4   ┆ 222              ┆ 5      ┆ B    ┆ 2     ┆ 20       ┆ 150.0 ┆ []      ┆ []       │
│ 11  ┆ s2_5   ┆ 444              ┆ 5      ┆ D    ┆ 1     ┆ 40       ┆ 250.0 ┆ []      ┆ []       │
│ 12  ┆ s3_1   ┆ 111              ┆ 6      ┆ A    ┆ 1     ┆ 10       ┆ 100.0 ┆ []      ┆ []       │
│ 13  ┆ s3_2   ┆ 111              ┆ 6      ┆ A    ┆ 1     ┆ 10       ┆ 100.0 ┆ []      ┆ []       │
│ 14  ┆ s3_3   ┆ 111              ┆ 6      ┆ A    ┆ 1     ┆ 10       ┆ 100.0 ┆ []      ┆ []       │
│ 15  ┆ s3_4   ┆ 222              ┆ 6      ┆ B    ┆ 2     ┆ 20       ┆ 150.0 ┆ []      ┆ []       │
│ 16  ┆ s3_5   ┆ 444              ┆ 6      ┆ D    ┆ 1     ┆ 40       ┆ 250.0 ┆ []      ┆ []       │
│ 17  ┆ s3_6   ┆ 333              ┆ 6      ┆ C    ┆ 1     ┆ 30       ┆ 200.0 ┆ []      ┆ []       │
└─────┴────────┴──────────────────┴────────┴──────┴───────┴──────────┴───────┴─────────┴──────────┘

After I match these, I have an output dataframe that contains three columns of [list] type which aggregate the ids, str_ids and sources into groups of "duplicated" records:

┌─────────────┬──────────────────────────┬────────────────┐
│ id          ┆ str_id                   ┆ source         │
│ ---         ┆ ---                      ┆ ---            │
│ list[i64]   ┆ list[str]                ┆ list[i64]      │
╞═════════════╪══════════════════════════╪════════════════╡
│ [5, 9, 14]  ┆ ["s1_5", "s2_3", "s3_3"] ┆ [1, 5, 6]      │
│ [2, 8, 13]  ┆ ["s1_2", "s2_2", "s3_2"] ┆ [1, 5, 6]      │
│ [6]         ┆ ["s1_6"]                 ┆ [1]            │
│ [3, 10, 15] ┆ ["s1_3", "s2_4", "s3_4"] ┆ [1, 5, 6]      │
│ [1, 7, 12]  ┆ ["s1_1", "s2_1", "s3_1"] ┆ [1, 5, 6]      │
│ [11, 16]    ┆ ["s2_5", "s3_5"]         ┆ [5, 6]         │
│ [4, 17]     ┆ ["s1_4", "s3_6"]         ┆ [1, 6]         │
└─────────────┴──────────────────────────┴────────────────┘

What's the most efficient way to either:

  1. update the m_status column in the original dataframe: for every record in a group of size at least 2, set m_status to the values of the opposing sources if source == 1, otherwise set m_status to [1] if source 1 is present in the group.

    so the outcome would be:

    
    ┌─────┬────────┬──────────────────┬────────┬──────┬───────┬──────────┬───────┬─────────┬──────────┐
    │ id  ┆ str_id ┆ partition_number ┆ source ┆ type ┆ state ┆ quantity ┆ price ┆ m_group ┆ m_status │
    │ --- ┆ ---    ┆ ---              ┆ ---    ┆ ---  ┆ ---   ┆ ---      ┆ ---   ┆ ---     ┆ ---      │
    │ i64 ┆ str    ┆ str              ┆ i64    ┆ str  ┆ i64   ┆ i64      ┆ f64   ┆ list    ┆ list     │
    ╞═════╪════════╪══════════════════╪════════╪══════╪═══════╪══════════╪═══════╪═════════╪══════════╡
    │ 1   ┆ s1_1   ┆ 111              ┆ 1      ┆ A    ┆ 1     ┆ 10       ┆ 100.0 ┆ []      ┆ [5, 6]   │
    │ 2   ┆ s1_2   ┆ 111              ┆ 1      ┆ A    ┆ 1     ┆ 10       ┆ 100.0 ┆ []      ┆ [5, 6]   │
    │ 3   ┆ s1_3   ┆ 222              ┆ 1      ┆ B    ┆ 2     ┆ 20       ┆ 150.0 ┆ []      ┆ [5, 6]   │
    │ 4   ┆ s1_4   ┆ 333              ┆ 1      ┆ C    ┆ 1     ┆ 30       ┆ 200.0 ┆ []      ┆ [6]      │
    │ 5   ┆ s1_5   ┆ 111              ┆ 1      ┆ A    ┆ 1     ┆ 10       ┆ 100.0 ┆ []      ┆ [5, 6]   │
    │ 6   ┆ s1_6   ┆ 111              ┆ 1      ┆ A    ┆ 1     ┆ 10       ┆ 100.0 ┆ []      ┆ []       │
    │ 7   ┆ s2_1   ┆ 111              ┆ 5      ┆ A    ┆ 1     ┆ 10       ┆ 100.0 ┆ []      ┆ [1]      │
    │ 8   ┆ s2_2   ┆ 111              ┆ 5      ┆ A    ┆ 1     ┆ 10       ┆ 100.0 ┆ []      ┆ [1]      │
    │ 9   ┆ s2_3   ┆ 111              ┆ 5      ┆ A    ┆ 1     ┆ 10       ┆ 100.0 ┆ []      ┆ [1]      │
    │ 10  ┆ s2_4   ┆ 222              ┆ 5      ┆ B    ┆ 2     ┆ 20       ┆ 150.0 ┆ []      ┆ [1]      │
    │ 11  ┆ s2_5   ┆ 444              ┆ 5      ┆ D    ┆ 1     ┆ 40       ┆ 250.0 ┆ []      ┆ []       │
    │ 12  ┆ s3_1   ┆ 111              ┆ 6      ┆ A    ┆ 1     ┆ 10       ┆ 100.0 ┆ []      ┆ [1]      │
    │ 13  ┆ s3_2   ┆ 111              ┆ 6      ┆ A    ┆ 1     ┆ 10       ┆ 100.0 ┆ []      ┆ [1]      │
    │ 14  ┆ s3_3   ┆ 111              ┆ 6      ┆ A    ┆ 1     ┆ 10       ┆ 100.0 ┆ []      ┆ [1]      │
    │ 15  ┆ s3_4   ┆ 222              ┆ 6      ┆ B    ┆ 2     ┆ 20       ┆ 150.0 ┆ []      ┆ [1]      │
    │ 16  ┆ s3_5   ┆ 444              ┆ 6      ┆ D    ┆ 1     ┆ 40       ┆ 250.0 ┆ []      ┆ []       │
    │ 17  ┆ s3_6   ┆ 333              ┆ 6      ┆ C    ┆ 1     ┆ 30       ┆ 200.0 ┆ []      ┆ [1]      │
    └─────┴────────┴──────────────────┴────────┴──────┴───────┴──────────┴───────┴─────────┴──────────┘
    
  2. create a completely new dataframe (the order can differ) that contains the ids, str_ids and m_status in the same way as above. That way I wouldn't have to do a lookup into the original dataframe (although with the ids that lookup should not be expensive) and could just iterate to build the new one.

My solution so far:


df_out = df_out.select("id", "str_id", "source")

# for every group, map each id to the comma-joined sources of the *other* rows
m_status_mapping = {}
for ids, str_ids, sources in df_out.iter_rows():
    for i, id_ in enumerate(ids):
        opposite_sources = [str(s) for j, s in enumerate(sources) if j != i]
        m_status_mapping[id_] = ','.join(opposite_sources)

df = df_original.with_columns(
    pl.col("id").replace(m_status_mapping).alias("m_status")
)
df = df.with_columns(pl.col("m_status").str.split(","))
df.select("id", "str_id", "m_status")

Which results in following output:

id  str_id  m_status
i64 str     list[str]
1   "s1_1"  ["5", "6"]
2   "s1_2"  ["5", "6"]
3   "s1_3"  ["5", "6"]
4   "s1_4"  ["6"]
5   "s1_5"  ["5", "6"]
6   "s1_6"  [""]
7   "s2_1"  ["1", "6"]
8   "s2_2"  ["1", "6"]
9   "s2_3"  ["1", "6"]
10  "s2_4"  ["1", "6"]
11  "s2_5"  ["6"]
12  "s3_1"  ["1", "5"]
13  "s3_2"  ["1", "5"]
14  "s3_3"  ["1", "5"]
15  "s3_4"  ["1", "5"]
16  "s3_5"  ["5"]
17  "s3_6"  ["1"]

It almost works, but I get too many sources in m_status for rows with source != 1. It's also probably terrible efficiency-wise; there must be a much better way to do this.

1 Answer

Using the dataframe with aggregated duplicate records:

(
    df.with_columns(
        l = pl.col.source.list.len(),                   # size of the group
        has1 = pl.col.source.list.contains(1),          # does the group contain source 1?
        excl1 = pl.col.source.list.set_difference([1])  # all sources in the group except 1
    ).explode("id", "str_id", "source")
    .select(
        pl.col("id", "str_id", "source"),
        m_status =
        pl.when(pl.col.l >= 2, pl.col.source == 1).then(pl.col.excl1)
        .when(pl.col.l >= 2, pl.col.has1).then([1])
        .otherwise([])
    )
    .sort("id")
)

┌─────┬────────┬────────┬───────────┐
│ id  ┆ str_id ┆ source ┆ m_status  │
│ --- ┆ ---    ┆ ---    ┆ ---       │
│ i64 ┆ str    ┆ i64    ┆ list[i64] │
╞═════╪════════╪════════╪═══════════╡
│ 1   ┆ s1_1   ┆ 1      ┆ [6, 5]    │
│ 2   ┆ s1_2   ┆ 1      ┆ [6, 5]    │
│ 3   ┆ s1_3   ┆ 1      ┆ [6, 5]    │
│ 4   ┆ s1_4   ┆ 1      ┆ [6]       │
│ 5   ┆ s1_5   ┆ 1      ┆ [6, 5]    │
│ 6   ┆ s1_6   ┆ 1      ┆ []        │
│ 7   ┆ s2_1   ┆ 5      ┆ [1]       │
│ 8   ┆ s2_2   ┆ 5      ┆ [1]       │
│ 9   ┆ s2_3   ┆ 5      ┆ [1]       │
│ 10  ┆ s2_4   ┆ 5      ┆ [1]       │
│ 11  ┆ s2_5   ┆ 5      ┆ []        │
│ 12  ┆ s3_1   ┆ 6      ┆ [1]       │
│ 13  ┆ s3_2   ┆ 6      ┆ [1]       │
│ 14  ┆ s3_3   ┆ 6      ┆ [1]       │
│ 15  ┆ s3_4   ┆ 6      ┆ [1]       │
│ 16  ┆ s3_5   ┆ 6      ┆ []        │
│ 17  ┆ s3_6   ┆ 6      ┆ [1]       │
└─────┴────────┴────────┴───────────┘

As an addition, this is how you can aggregate the "duplicate" records in the first place:

(
    df
    .with_columns(i = pl.int_range(pl.len()).over("source","partition_number"))
    .group_by("i","partition_number", maintain_order=True)
    .agg("id","str_id","source")
    .drop("i","partition_number")
)

┌─────────────┬──────────────────────────┬───────────┐
│ id          ┆ str_id                   ┆ source    │
│ ---         ┆ ---                      ┆ ---       │
│ list[i64]   ┆ list[str]                ┆ list[i64] │
╞═════════════╪══════════════════════════╪═══════════╡
│ [1, 7, 12]  ┆ ["s1_1", "s2_1", "s3_1"] ┆ [1, 5, 6] │
│ [2, 8, 13]  ┆ ["s1_2", "s2_2", "s3_2"] ┆ [1, 5, 6] │
│ [3, 10, 15] ┆ ["s1_3", "s2_4", "s3_4"] ┆ [1, 5, 6] │
│ [4, 17]     ┆ ["s1_4", "s3_6"]         ┆ [1, 6]    │
│ [5, 9, 14]  ┆ ["s1_5", "s2_3", "s3_3"] ┆ [1, 5, 6] │
│ [6]         ┆ ["s1_6"]                 ┆ [1]       │
│ [11, 16]    ┆ ["s2_5", "s3_5"]         ┆ [5, 6]    │
└─────────────┴──────────────────────────┴───────────┘

Using the same grouping, you can also calculate m_status over window groups, without aggregating at all:

(
    df
    .with_columns(i = pl.int_range(pl.len()).over("source","partition_number"))
    .with_columns(
        l = pl.len().over("partition_number","i"),
        has1 = (pl.col.source == 1).any().over("partition_number","i"),
        excl1 = pl.col.source.filter(pl.col.source != 1).over("partition_number","i", mapping_strategy="join")
    )
    .select(
        pl.col("id","str_id","source"),
        m_status = 
        pl.when(pl.col.l >= 2, pl.col.source == 1).then(pl.col.excl1)
        .when(pl.col.l >= 2, pl.col.has1).then([1])
        .otherwise([])        
    )    
)

┌─────┬────────┬────────┬───────────┐
│ id  ┆ str_id ┆ source ┆ m_status  │
│ --- ┆ ---    ┆ ---    ┆ ---       │
│ i64 ┆ str    ┆ i64    ┆ list[i64] │
╞═════╪════════╪════════╪═══════════╡
│ 1   ┆ s1_1   ┆ 1      ┆ [5, 6]    │
│ 2   ┆ s1_2   ┆ 1      ┆ [5, 6]    │
│ 3   ┆ s1_3   ┆ 1      ┆ [5, 6]    │
│ 4   ┆ s1_4   ┆ 1      ┆ [6]       │
│ 5   ┆ s1_5   ┆ 1      ┆ [5, 6]    │
│ 6   ┆ s1_6   ┆ 1      ┆ []        │
│ 7   ┆ s2_1   ┆ 5      ┆ [1]       │
│ 8   ┆ s2_2   ┆ 5      ┆ [1]       │
│ 9   ┆ s2_3   ┆ 5      ┆ [1]       │
│ 10  ┆ s2_4   ┆ 5      ┆ [1]       │
│ 11  ┆ s2_5   ┆ 5      ┆ []        │
│ 12  ┆ s3_1   ┆ 6      ┆ [1]       │
│ 13  ┆ s3_2   ┆ 6      ┆ [1]       │
│ 14  ┆ s3_3   ┆ 6      ┆ [1]       │
│ 15  ┆ s3_4   ┆ 6      ┆ [1]       │
│ 16  ┆ s3_5   ┆ 6      ┆ []        │
│ 17  ┆ s3_6   ┆ 6      ┆ [1]       │
└─────┴────────┴────────┴───────────┘
