# Experiment rules

## backend.experiment.rules.base

### BaseRules

Bases: `object`

Base class for other rules classes.

Source code in `backend/experiment/rules/base.py`
#### calculate_intermediate_score(session, result)

Process result data during a trial (i.e., between `next_round` calls). So far, this is only used in the matching_pairs rules files.

Override this in your rules file to control what value is returned when the frontend calls the `session/intermediate_score` endpoint.

Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| `session` | `Session` | current session | *required* |
| `result` | `Result` | result to be evaluated | *required* |

Returns:

| Type | Description |
|---|---|
| `int` | the score of the result |

Source code in `backend/experiment/rules/base.py`
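As a concrete illustration, a matching-pairs-style override might map the type of a card flip to a score. The following is a hypothetical standalone sketch: the `result_type` key and the score values are assumptions for illustration, not the actual model fields or scoring table.

```python
def intermediate_score(result_data: dict) -> int:
    """Score a single matching-pairs flip.

    The keys and point values below are illustrative assumptions.
    """
    score_map = {
        "match": 10,        # matched a card the participant had seen before
        "lucky_match": 5,   # matched a card they had not seen yet
        "no_match": 0,
        "misremembered": -10,
    }
    return score_map.get(result_data.get("result_type"), 0)
```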
#### calculate_score(result, data)

Use a scoring rule to calculate the score. The function uses the result's scoring rule if one is configured; otherwise, it returns `None`.

Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| `result` | `Result` | the Result object for which to calculate the score | *required* |
| `data` | `dict` | the data of the participant's response | *required* |

Source code in `backend/experiment/rules/base.py`
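A minimal sketch of what a configured scoring rule might compute, assuming a simple correctness check (the function name and signature are illustrative, not the library's actual API):

```python
def correctness_score(expected: str, given: str) -> int:
    """Return 1 for a correct response, 0 otherwise: the simplest
    form a scoring rule can take."""
    return 1 if expected == given else 0
```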
#### feedback_info()

Return info to be shown to the user when they are invited to give feedback.

Source code in `backend/experiment/rules/base.py`
#### final_score_message(session)

Create the final score message for the given session, based on the score per result. Override this to display different text on the final screen.

Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| `session` | `Session` | the current session | *required* |

Returns:

| Type | Description |
|---|---|
| `str` | a string with feedback for the participant based on their score |

Source code in `backend/experiment/rules/base.py`
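An override of this kind typically maps the session's total score to feedback text. A hypothetical sketch, where the thresholds and wording are assumptions for illustration:

```python
def final_score_message(total_score: float) -> str:
    """Map a session's total score to feedback text.

    The score thresholds and messages are illustrative assumptions.
    """
    if total_score >= 80:
        return "Amazing! You have a remarkable ear."
    if total_score >= 50:
        return "Well done! You scored above average."
    return "Thanks for playing. Practice makes perfect!"
```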
#### get_experiment_url(session)

Return the experiment url. Defaults to `experiment.slug`.

Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| `session` | `Session` | current session | *required* |

Source code in `backend/experiment/rules/base.py`
#### get_play_again_url(session)

Get the url to play the experiment again.

Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| `session` | `Session` | current session | *required* |

Source code in `backend/experiment/rules/base.py`
#### get_profile_question_trials(session, n_questions=1)

Get a list of trials for questions not yet answered by the user.

Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| `session` | `Session` | the current session | *required* |
| `n_questions` | `int` | the number of questions to return | `1` |

Returns:

| Type | Description |
|---|---|
| `list[Trial]` | list of `Trial` objects |

Source code in `backend/experiment/rules/base.py`
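The selection logic behind this method can be sketched without the Django models: filter out the questions the participant has already answered, then take the first `n_questions` of what remains. The question keys below are hypothetical examples.

```python
def select_unanswered(question_keys, answered_keys, n_questions=1):
    """Pick up to n_questions profile question keys the participant
    has not answered yet, preserving the configured order."""
    answered = set(answered_keys)
    remaining = [key for key in question_keys if key not in answered]
    return remaining[:n_questions]
```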
#### has_played_before(session)

Check if the current participant has completed this game previously.

Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| `session` | `Session` | current session | *required* |

Returns:

| Type | Description |
|---|---|
| `bool` | boolean indicating whether the current participant has finished a session of this game |

Source code in `backend/experiment/rules/base.py`
#### rank(session, exclude_unfinished=True)

Get a rank based on the session score, derived from the participant's percentile rank. Override this function in your rules file to change the rank calculation.

Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| `session` | `Session` | the current session | *required* |
| `exclude_unfinished` | `bool` | whether unfinished sessions should be excluded when calculating the rank | `True` |

Returns:

| Type | Description |
|---|---|
| `str` | a string indicating the rank of the participant (e.g., "bronze") |

Source code in `backend/experiment/rules/base.py`
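A percentile-based rank of this kind can be sketched in two steps: compute the session's percentile among finished sessions, then map the percentile to a label. The cutoffs and rank names below are assumptions, not the actual defaults.

```python
def percentile_rank(score, finished_scores):
    """Percentage of finished sessions that scored below this session."""
    if not finished_scores:
        return 100.0
    below = sum(1 for other in finished_scores if other < score)
    return 100.0 * below / len(finished_scores)


def rank_label(percentile):
    """Map a percentile to a rank name; cutoffs and labels are
    illustrative assumptions."""
    if percentile >= 75:
        return "gold"
    if percentile >= 50:
        return "silver"
    if percentile >= 25:
        return "bronze"
    return "plastic"
```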
#### validate_playlist(playlist=None)

Validate a playlist associated with this rules file, e.g., ensure that files follow a specific name format.

Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| `playlist` | `Playlist` | playlist to be checked | `None` |

Returns:

| Type | Description |
|---|---|
| `list[str]` | an array of error messages; an empty list means validation succeeded |

Source code in `backend/experiment/rules/base.py`
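An override usually checks each file name against the format the rules file expects and collects error messages. A hedged sketch over plain filename strings; the `name_stem.wav` pattern is an invented example format:

```python
import re


def validate_filenames(filenames, pattern=r"^[a-z]+_\d+\.wav$"):
    """Return a list of error messages; an empty list means the
    playlist passed validation. The name format is an assumption."""
    errors = []
    for name in filenames:
        if not re.match(pattern, name):
            errors.append(f"File {name} does not match the expected format")
    return errors
```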
## backend.experiment.rules.practice

### PracticeMixin

Bases: `object`

PracticeMixin can be used to present a trial a given number of times. After these practice trials, it tests whether the participant performed well enough to proceed.

Extend this class in your ruleset if you need a practice run for your participants.

Note that you could use this class to:

- create rules for a self-contained block with only the practice run, and define the experiment proper in another rules file;
- create rules which include the experiment proper after the practice phase.

This practice class is currently written towards two-alternative forced choice rulesets, but may be extended in the future.
Attributes:

| Name | Type | Description |
|---|---|---|
| `task_description` | `str` | will appear in the title of the experiment |
| `first_condition` | `str` | the first condition that trials may have (e.g., lower pitch) |
| `first_condition_i18n` | `str` | the way the condition will appear to participants; can be translated if you wrap the string in `_()` |
| `second_condition` | `str` | the second condition that trials may have (e.g., higher pitch) |
| `second_condition_i18n` | `str` | the way the condition will appear to participants; can be translated if you wrap the string in `_()` |
| `n_practice_rounds` | `int` | adjust to the number of practice rounds that should be presented |
| `n_practice_rounds_second_condition` | `int` | how often the second condition appears in the practice rounds, e.g., one "catch" trial, or half the practice trials |
| `n_correct` | `int` | how many of the participant's answers need to be correct to proceed |
Example

This is an example of a rules file which would only present the practice run to the participant:

```python
class MyPracticeRun(BaseRules, PracticeMixin):
    task_description = ""
    first_condition = 'lower'
    first_condition_i18n = _("LOWER")
    second_condition = 'higher'
    second_condition_i18n = _("HIGHER")
    n_practice_rounds = 10
    n_practice_rounds_second_condition = 5
    n_correct = 3

    def next_round(self, session):
        return self.next_practice_round(session)
```

See also the `duration_discrimination.py` rules file, which implements the experiment proper after the practice run.

Source code in `backend/experiment/rules/practice.py`
#### finalize_practice(session)

Finalize practice: set `{"practice_done": True}` in `session.json_data`.

Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| `session` | `Session` | the Session object | *required* |

Source code in `backend/experiment/rules/practice.py`
#### get_condition(session)

Keep track of the conditions presented in the practice phase through `session.json_data`.

In the default implementation, it generates `n_practice_rounds` conditions, with `n_second_condition` occurrences of the second condition and `n_practice_rounds - n_second_condition` occurrences of the first condition, shuffles these randomly, and then presents one condition each round. Override this method if you need a different setup.

Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| `session` | `Session` | the Session object | *required* |

Source code in `backend/experiment/rules/practice.py`
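The default condition planning described above can be sketched without the session model: build the full list of conditions up front, shuffle it once, and pop one entry per round. The condition strings and the `seed` parameter are assumptions added for testability.

```python
import random


def plan_practice_conditions(n_practice_rounds, n_second_condition,
                             first="lower", second="higher", seed=None):
    """Build the shuffled list of practice conditions: the second
    condition appears n_second_condition times, the first condition
    fills the remaining rounds."""
    conditions = ([second] * n_second_condition
                  + [first] * (n_practice_rounds - n_second_condition))
    random.Random(seed).shuffle(conditions)
    return conditions
```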
#### get_condition_and_correctness(session)

Checks the condition of the last Trial, and whether the response of the participant was correct. This method is called from `get_feedback_explainer`.

Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| `session` | `Session` | the Session object | *required* |

Returns:

| Type | Description |
|---|---|
| `Tuple[str, bool]` | a tuple of the last trial's condition, and whether it was answered correctly |

Source code in `backend/experiment/rules/practice.py`
#### get_continuation_explainer()

Override this explainer if you want to give extra information to the participant before the actual test phase starts.

Returns: `Explainer` object

Source code in `backend/experiment/rules/practice.py`
#### get_feedback_explainer(session)

Override this explainer if you need to give different feedback to participants about whether or not they answered correctly.

Returns:

| Type | Description |
|---|---|
| `Explainer` | Explainer object |

Source code in `backend/experiment/rules/practice.py`
#### get_intro_explainer()

Override this method to explain the procedure of the current block to your participants.

Returns:

| Type | Description |
|---|---|
| `Explainer` | Explainer object |

Source code in `backend/experiment/rules/practice.py`
#### get_next_trial(session)

Provide the next trial action.

Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| `session` | `Session` | the Session object | *required* |

Returns:

| Type | Description |
|---|---|
| `Trial` | Trial object |

Source code in `backend/experiment/rules/practice.py`
#### get_practice_explainer()

Override this method if you want to give extra information about the practice itself.

Returns:

| Type | Description |
|---|---|
| `Explainer` | Explainer object |

Source code in `backend/experiment/rules/practice.py`
#### get_restart_explainer()

Override this method if you want to adjust the feedback explaining why participants need to practice again.

Returns:

| Type | Description |
|---|---|
| `Explainer` | Explainer object |

Source code in `backend/experiment/rules/practice.py`
#### next_practice_round(session)

This method implements the logic for presenting explainers and practice rounds, and for checking after the practice rounds whether the participant was successful.

- If so: proceed to the next stage of the experiment. `session.json_data` will have set `{'practice_done': True}`, which you can check for in your `next_round` logic.
- If not: delete all results so far, and restart the practice.

You can call this method from your ruleset's `next_round` function.

Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| `session` | `Session` | the Session object | *required* |

Returns:

| Type | Description |
|---|---|
| `list[Union[Trial, Explainer]]` | list of Trial and/or Explainer objects |

Source code in `backend/experiment/rules/practice.py`
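The branching described above can be sketched as a pure decision function over the practice results collected so far; the string return values are illustrative stand-ins for the actions the mixin takes.

```python
def next_practice_action(correct_flags, n_practice_rounds, n_correct):
    """Decide the next step from the practice results so far
    (correct_flags: one boolean per answered practice trial)."""
    if len(correct_flags) < n_practice_rounds:
        return "trial"     # practice phase still running
    if sum(correct_flags) >= n_correct:
        return "proceed"   # would set {'practice_done': True}
    return "restart"       # would delete the practice results
```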
#### practice_successful(session)

Checks if the practice was successful, i.e., that the participant gave at least `n_correct` correct responses. Override this method if you need different logic.

Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| `session` | `Session` | the Session object | *required* |

Returns:

| Type | Description |
|---|---|
| `bool` | a boolean indicating whether or not the practice was successful |

Source code in `backend/experiment/rules/practice.py`
## backend.experiment.rules.staircasing

### register_turnpoint(session, last_result)

Register a turnpoint:

- set a comment on the previous result to indicate the turnpoint
- increase `final_score` (used as a counter for turnpoints)
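Both effects can be sketched against hypothetical stand-in objects (the dataclasses below replace the real Django `Session` and `Result` models, and the comment text is an assumption):

```python
from dataclasses import dataclass


@dataclass
class FakeResult:            # stand-in for the Result model
    comment: str = ""


@dataclass
class FakeSession:           # stand-in for the Session model
    final_score: int = 0     # doubles as the turnpoint counter


def register_turnpoint(session, last_result):
    """Mark the previous result as a turnpoint and bump the counter."""
    last_result.comment += " turnpoint"
    session.final_score += 1
```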