Intuition: our interviews generate a lot of session data. Can we reduce its size by optimizing our largest in-session data structures?
[This is sparked by how big our database dumps are getting, which also causes trouble when migrating to hosted databases]
One offender I know of is the MACourts repo.
Any other designs to revisit?
Potentially look at more modularity: making more of the interview pieces we offer optional. Note this is less about separate YAML files than about smaller code blocks, so that not everything runs every time.
Strategy: look at a random subset of real interview sessions at the database table layer to see what really is making the database size grow the most.
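A rough sketch of that audit, once a sample of sessions has been pulled and unpickled into plain dicts (the function names, the sample sessions, and the assumption that each session is a top-level dict of variables are all illustrative, not our actual schema):

```python
import pickle
import random

def variable_sizes(session_dict):
    """Rank a session's top-level variables by their pickled size in bytes."""
    sizes = {name: len(pickle.dumps(value)) for name, value in session_dict.items()}
    return sorted(sizes.items(), key=lambda kv: kv[1], reverse=True)

def audit_sample(sessions, sample_size=50, seed=0):
    """Aggregate per-variable pickled sizes across a random sample of sessions,
    so we can see which data structures dominate storage growth."""
    rng = random.Random(seed)
    sample = rng.sample(sessions, min(sample_size, len(sessions)))
    totals = {}
    for session in sample:
        for name, size in variable_sizes(session):
            totals[name] = totals.get(name, 0) + size
    return sorted(totals.items(), key=lambda kv: kv[1], reverse=True)

# Fake example: one cached data structure dwarfs everything else
sessions = [{"courts_cache": list(range(10000)), "user_name": "alice"}]
top = audit_sample(sessions)
print(top[0][0])  # name of the biggest contributor
```

If the sample confirms that one or two cached structures (e.g., court lookup data) account for most of the growth, that points straight at which repos to refactor first.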