I want to compare the daily average revenue of a business during a 7-day promotion period with the daily average over the rest of the year.
So sample 1 has 7 data points, while sample 2 has about 300 (one per day). Sample 1 is very small, and sample 2 is right-skewed because of seasonality.
My goal is to create an evaluation method of the sample 1 average, based on sample 2 average, for which:
- If sample 1's average falls within sample 2's average ± 1 deviation, the promotion went okay.
- If it falls below the average − 1 deviation, it went poorly.
- If it falls above the average + 1 deviation, it went well.
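In code, the classification rule above could be sketched like this (a minimal sketch: `promo` and `rest` are placeholder names for the two daily-revenue samples, and "deviation" here is taken to be the sample standard deviation):

```python
import numpy as np

def evaluate(promo, rest):
    """Classify the promo-period daily average against the
    rest-of-year average using mean +/- 1 standard deviation."""
    promo_avg = np.mean(promo)
    center = np.mean(rest)
    spread = np.std(rest, ddof=1)  # sample standard deviation
    if promo_avg > center + spread:
        return "well"
    elif promo_avg < center - spread:
        return "poorly"
    return "okay"
```

Swapping `np.mean`/`np.std` for the median and the median absolute deviation gives the robust variant I also tried.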
So far I have tried (a) comparing the mean of sample 1 with the mean of sample 2 ± 1 standard deviation, and (b) comparing the median of sample 1 with the median of sample 2 ± 1 median absolute deviation (MAD). I also tried bootstrapping sample 1 and applying the central limit theorem to sample 2, but the CLT method doesn't reflect reality. The first two methods seem reasonable, but the evaluation doesn't feel precise.
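For reference, the bootstrap I tried on sample 1 looks roughly like this (a minimal sketch; the seed and `n_boot` are arbitrary choices of mine):

```python
import numpy as np

rng = np.random.default_rng(42)

def bootstrap_means(sample, n_boot=10_000):
    """Resample the 7-day promo sample with replacement and
    collect the mean of each resample."""
    sample = np.asarray(sample, dtype=float)
    idx = rng.integers(0, len(sample), size=(n_boot, len(sample)))
    return sample[idx].mean(axis=1)
```

The spread of these bootstrap means is what I compared against the CLT-based normal approximation of sample 2's mean.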
Given that I want to compare the average revenue of a 7-day sample against a highly right-skewed distribution, are mean ± standard deviation or median ± MAD good choices?
Is there a better alternative?