Executing your Experiment
Now that the experiment and application are set up, we can run the experiment.
In the developer portal, open the experiment you created in Defining your Experiment.
We can see that the experiment is currently in the "Draft" state because the A/B test has not yet started.
The buttons at the bottom of the form perform the following actions:
- Click the "Edit" button to update experiment details.
- Click the "Clone" button to create a new experiment based on the current one. It will begin in the "Draft" state.
- Click the "Delete" button to discard the experiment.
- Click the "Start" button to begin the experiment.
For now, click the "Start" button. A confirmation dialog will pop up. Select "Yes" to start the experiment.
You will see that the status indicator changes to "Running". The experiment is now running and Kii Cloud is ready to receive view and conversion events from your application.
Once the experiment has started, the Results section is updated daily with the current A/B testing results, as in the following sample screenshot.
The results include the following information:
- The experiment name.
- When the experiment was started (and ended if it has finished).
- The current results of the experiment (with some graphs).
- View Events: The name of the view event.
- View Count: The number of view events reported for each variation.
- Conversion: The name of the conversion event.
- Count: The number of conversion events (i.e. the "eventClicked" event in our example scenario) reported for each variation.
- Rate: The conversion rate (i.e., Count divided by View Count). The results are statistically analyzed to show you the conversion rate, with a +/- range (if applicable).
- Change: The gain/loss in conversion rate relative to Variation A, that is:

  (Variation B conversion rate - Variation A conversion rate) / Variation A conversion rate
- Confidence: The confidence measure shows the suitability of the presented conversion rates for making a decision. If the confidence is 95% or higher, a check mark will be displayed indicating that you have enough results to draw a conclusion. We need at least 1000 views before a reliable confidence calculation can be made ("N/A" will be shown if we do not have enough views).
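To make the Rate and Change columns concrete, here is a minimal sketch of the arithmetic (the counts below are invented for illustration and are not taken from the sample screenshot):

```python
def conversion_rate(conversion_count, view_count):
    """Rate: conversion events divided by view events for one variation."""
    return conversion_count / view_count

def relative_change(rate_a, rate_b):
    """Change: (B's rate - A's rate) / A's rate, i.e. gain/loss vs. Variation A."""
    return (rate_b - rate_a) / rate_a

# Invented counts for two variations.
rate_a = conversion_rate(conversion_count=120, view_count=1000)  # 0.12
rate_b = conversion_rate(conversion_count=150, view_count=1000)  # 0.15

print(f"Change: {relative_change(rate_a, rate_b):+.2%}")  # Change: +25.00%
```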
In our example, a check mark is displayed next to the confidence level indicating that we can make a decision based on the results. We can confidently conclude that the conversion rate of Variation B is higher than that of Variation A (with a +16.42% conversion improvement).
By clicking the "Pause" button at the bottom of the Results screen, you can pause the experiment.
The status indicator changes to "Paused" when you pause the experiment. Your application will not be able to load the variation in this state, so you should handle this and apply other variable values (in most cases falling back to the default values is sufficient). For more details, please see these code samples.
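One way to structure that fallback (a hypothetical sketch — `load_variation`, `ExperimentPausedError`, and the variable names are stand-ins for whatever your client SDK actually exposes, not real Kii API names):

```python
# Default variable values the application ships with.
DEFAULT_VARIABLES = {"button_color": "blue", "button_label": "Buy now"}

class ExperimentPausedError(Exception):
    """Raised by our stand-in loader when the experiment is paused."""

def load_variation(experiment_id, paused=True):
    """Stand-in for the SDK call; simulates a paused experiment by default."""
    if paused:
        raise ExperimentPausedError(experiment_id)
    return {"button_color": "red", "button_label": "Buy today"}

def get_variables(experiment_id):
    """Fetch the variation's variables, falling back to defaults when paused."""
    try:
        return load_variation(experiment_id)
    except ExperimentPausedError:
        # The experiment is paused: apply the default values instead.
        return DEFAULT_VARIABLES
```

The same pattern applies wherever the variation cannot be loaded; the key point is that the application always ends up with a usable set of variable values.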
When you later wish to resume the experiment, click the "Restart" button. This will activate your experiment and resume accepting view and conversion events.
When you think the testing is sufficient and you wish to choose a variation to implement in your application, click the "Stop" button. This will show you the following dialog.
You have three choices when ending your experiment:
- Variation A: Select Variation A.
- Variation B: Select Variation B.
- No change: Select neither variation and let the application apply values locally.
When you select either Variation A or B and end the experiment, the client SDK will always return the variable values of the selected variation.
If you select "No Change", however, the client SDK will instead throw an exception. Your application should catch this exception and take the appropriate action (e.g., apply default values). For more details, please see the description here.
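Following the same fallback pattern, the "No Change" case can be sketched like this (again hypothetical — `get_chosen_variation` and `NoChangeError` are illustrative names, not the SDK's actual API; consult the linked description for the real exception type):

```python
class NoChangeError(Exception):
    """Stand-in for the exception thrown when "No Change" ended the experiment."""

def get_chosen_variation(chosen):
    """Stand-in for the SDK call that returns the selected variation's values."""
    if chosen is None:  # "No Change" was selected.
        raise NoChangeError()
    return chosen

def variables_after_experiment(chosen, defaults):
    """Return the chosen variation's values, or defaults on "No Change"."""
    try:
        return get_chosen_variation(chosen)
    except NoChangeError:
        # "No Change": take the appropriate action, here applying defaults.
        return defaults

print(variables_after_experiment({"button_color": "red"}, {"button_color": "blue"}))
# {'button_color': 'red'}
print(variables_after_experiment(None, {"button_color": "blue"}))
# {'button_color': 'blue'}
```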