Do you allow the carrying capacity to change during radiation therapy?
What would it take for the model to be deployed in clinical use?
Hi Kevin. So, based on our discussions with our radiation oncology colleagues, the biggest hurdle is external validation - i.e. we need to validate our fits, forecasts, etc. on a completely independent cohort. The next step would be validation in clinical trials.
Thank you!
Really nice work! It looks from the results (not shown above) that the standard of care is close to the optimal non-personalised dose, assuming it's better to give a bit too much than a bit too little. Is that right? If so, do you plan to adjust your recommended doses to err on the side of caution?
(Ignore the “not shown above” - that was meant for twitter before I decided to post here instead)
Thanks Rob! I don’t know if I’d agree with that. From our simulations 28/39 patients could safely be de-escalated without loss of tumor control. We’ve actually built in a lot of caution/safety checks by limiting how low of a dose we would recommend (min of 54 Gy). Interestingly, there are actually trials based on PET hypoxia imaging that test de-escalating to as little as 30 Gy! So I think our recommendations are already fairly cautious :)
One thing I didn’t note is that the majority of the patients we’re looking at are HPV+, and there is evidence to show that HPV+ patients are much more radiosensitive than the HPV- ones, and the original 66-70 Gy was arrived at when HPV- patients were more prevalent. So there’s biological/clinical reasons to expect that our de-escalation recommendations make sense.
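For anyone curious what the safety floor mentioned above amounts to in practice, a minimal sketch is just a clamp on the recommended dose. The 70 Gy ceiling, function name, and example doses here are illustrative assumptions, not taken from the actual pipeline:

```python
# Illustrative sketch: cap any model-recommended dose at the 54 Gy floor
# from the discussion above; the standard-of-care ceiling is an assumption.
MIN_DOSE_GY = 54.0       # lowest dose the model is allowed to recommend
STANDARD_DOSE_GY = 70.0  # assumed upper bound, roughly standard of care

def safe_dose(model_dose_gy: float) -> float:
    """Clamp a model-recommended dose into the allowed range."""
    return min(max(model_dose_gy, MIN_DOSE_GY), STANDARD_DOSE_GY)

print([safe_dose(d) for d in (48.2, 61.5, 74.0)])  # [54.0, 61.5, 70.0]
```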
@Linus: Would it make sense to include competition between the mutant HSCs? And also the function you’re fitting to the longitudinal data, does it include the expected saturation at 0.5?
Yes, it includes the expected saturation at 0.5, assuming no loss of heterozygosity.
Explicit competition is definitely of interest. It would likely also result in logistic curves, but it would be interesting to see whether the details (e.g. fluctuations) differ to the extent that we could tell from the data.
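For what it's worth, the logistic form with the 0.5 saturation discussed above can be sketched as follows. The data are synthetic and the parameter values are made up; this just illustrates fitting with the carrying capacity fixed rather than free:

```python
# Sketch of fitting a logistic with carrying capacity fixed at 0.5 - the
# saturation expected for a heterozygous mutant clone with no loss of
# heterozygosity. Synthetic data only, not from the study.
import numpy as np
from scipy.optimize import curve_fit

K = 0.5  # fixed saturation: mutant allele fraction cannot exceed 0.5

def logistic(t, f0, s):
    """Logistic growth from initial fraction f0 with fitness s, capacity K."""
    return K / (1.0 + (K / f0 - 1.0) * np.exp(-s * t))

rng = np.random.default_rng(0)
t = np.linspace(0.0, 40.0, 20)  # years of follow-up (illustrative)
obs = logistic(t, 0.01, 0.3) + rng.normal(0.0, 0.005, t.size)

(f0_hat, s_hat), _ = curve_fit(
    logistic, t, obs, p0=[0.05, 0.1], bounds=([1e-4, 0.0], [K, 2.0])
)
print(f"f0 = {f0_hat:.3f}, s = {s_hat:.3f}")
```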
@Mohammad: Aha! So that can explain why you find more cases for de-escalation than escalation. To put my question another way, though: Suppose you simulate your model many times, each time drawing parameter values from your posterior distributions, then evaluate the average outcome across the entire cohort. The cohort outcome won't necessarily be optimal, even if the individual patient outcomes are optimal. Perhaps you could improve the cohort outcome by adjusting doses slightly in one direction or the other.
@Rob: Interesting idea. Worth exploring. The other thing is we need to incorporate some sort of measure of toxicity, which is what we’re actually trying to minimize - we don’t have a direct measure right now, so we just assume that minimizing cumulative dose will be a good proxy. So, our “optimal” is high control + low toxicity.
Yes, the difficult bit is figuring out how to define “average outcome”.
True, and this gets to why we were interested in simulating a cohort level trial, but you raise a good point - I should probably show some sort of cohort-level assessment of outcomes, as one might see in a real trial.
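The cohort-level check suggested above could be sketched roughly like this: draw parameters from each patient's posterior, apply a cohort-wide dose adjustment, and average the simulated outcomes. Everything in this snippet - the toy dose-response, the stand-in posterior samples, the stub doses - is a placeholder assumption, not the actual fitted model:

```python
# Placeholder sketch: posterior draws per patient, a cohort-wide dose
# shift, and the average simulated tumour-control outcome across draws.
import numpy as np

rng = np.random.default_rng(1)
n_patients, n_draws = 39, 1000

# Stand-in "posterior" samples of a per-patient radiosensitivity parameter
alpha = (rng.normal(0.35, 0.05, (n_patients, 1))
         + rng.normal(0.0, 0.02, (n_patients, n_draws)))
base_dose = np.full((n_patients, 1), 66.0)  # stub personalised doses (Gy)

def control_prob(dose_gy, alpha):
    """Toy monotone dose-response (assumed): higher dose, higher control."""
    return 1.0 - np.exp(-alpha * dose_gy / 10.0)

for shift in (-2.0, 0.0, 2.0):  # cohort-wide adjustment in Gy
    tcp = control_prob(base_dose + shift, alpha)
    print(f"shift {shift:+.0f} Gy: mean cohort control = {tcp.mean():.3f}")
```

The point of the loop is exactly the adjustment you describe: if the mean cohort outcome keeps improving as the shift moves in one direction, the individually optimal doses were not cohort-optimal.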
How did you decide on the position of the initial injection site for the tumour?
Given the scarcity of data, how did you decide which parameters to fit and which ones to fix?
Rather than randomness, could the difference between the trials be due to the order of the treatments (chemo early vs. radiotherapy late, and vice versa)?