My company is A/B testing a number of ad campaigns. I put together some workbooks to make it easier to track performance, but I can't accurately display statistical significance. I can add table calculations for Difference and % Difference between performance indicators, but I can't figure out how to flag whether a difference is statistically significant, or how to display the level of significance.
In the attached example I have two different A/B tests running. Each A/B test has two ad campaigns: a control and a test. We track whether the ad was clicked (Ad Click) and whether the user eventually purchases the product (Ad Conversion), and we evaluate ad performance using Ad C2C% (click-to-conversion rate).

I included two different A/B tests because this is how the view will look for general users. They will have the option to filter for different A/B tests, but they need to be able to display more than one at a time. This is because we test our ads in an ad-to-ad format: we might have 10 ads that are identical but placed on different pages of our website. Each of those ads is tracked individually, even though the same change is being tested across all of them. End users will want to see how a specific ad performed in a specific placement, and then select all A/B tests running that type of test to see how it performs across all placements in aggregate.
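For context, the calculation I'm trying to reproduce in the workbook is essentially a standard two-proportion z-test on the click-to-conversion rates of the control and test campaigns. Here is a rough Python sketch of what I mean (the function name, the counts, and the 95% threshold are just illustrative, not from my actual data):

```python
from math import sqrt, erf

def two_proportion_z_test(conv_a, clicks_a, conv_b, clicks_b):
    """Two-sided two-proportion z-test on click-to-conversion rates."""
    p_a = conv_a / clicks_a
    p_b = conv_b / clicks_b
    # Pooled conversion rate under the null hypothesis of no difference
    p_pool = (conv_a + conv_b) / (clicks_a + clicks_b)
    se = sqrt(p_pool * (1 - p_pool) * (1 / clicks_a + 1 / clicks_b))
    z = (p_b - p_a) / se
    # Two-sided p-value from the standard normal CDF
    p_value = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))
    return z, p_value

# Made-up example: control converts 120/1000 clicks, test converts 150/1000
z, p = two_proportion_z_test(120, 1000, 150, 1000)
significant = p < 0.05  # flag significance at 95% confidence
print(z, p, significant)
```

So the goal is to get that "significant yes/no" flag, plus the p-value (or confidence level), to show up next to the Difference and % Difference table calculations in Tableau.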
I would greatly appreciate any help! Thanks!