Enable screen share?
Oleg Demin Jr
Please allow me to share my screen too. I'm the second speaker
Same for me, I am the fourth speaker. Thanks!
Oleg Demin Jr
Now I can. Thank you!
I think it is fixed now!
I'm curious about which models this method speeds up versus which it does not. À la the "No Free Lunch" theorem (optimization methods are equivalent when averaged over all possible models), it's important to characterize the classes of models a speed-up works for, so it can be fairly compared to methods that work on larger classes of models.
Question to Christopher: does the method described in the presentation work for arbitrary RHS non-linearities, or only for mass-action laws?
How long does it take to train your neural network to obtain your surrogate model, i.e. what is the upfront compute cost before you can start benefiting from the reduced dimensionality of the surrogate model?
Question for Oleg. In example #3 for Vpop generation, is the identification of the parameters that need to be recalibrated done automatically and recursively?
Could the impact of gender or age be taken into account in the Vpop?
Are you saying that it is better to have extra parameters varied (rather than fixing parameters that are not identifiable from the data)? If so many parameters are allowed to vary, then the parameter space is highly undersampled, so it is not clear to me that allowing them all to vary is a better approach than fixing some.
When doing such vpops of PBPK simulations, specifically on the PK version and looking to incorporate covariate effects like sex or weight, is there a reason to not take an iPSP approach and characterize the variability using nonlinear mixed effects approaches? [Given that the NLME training is fast/robust enough.]
1) Was the sensitivity analysis performed by varying one parameter at a time, or multiple parameters at a time? 2) Was the sensitivity measure normalized to account for different scales of different parameters?
Given that your QSP model is a qualitative one with discrete node levels (0 / 1) and logic-based rules, how do you get the continuous-looking plots shown in the talk? Do you average over multiple asynchronous runs to obtain these?
Could you give a bit more detail on how the logical interactions you mentioned are represented mathematically and simulated?
Question to Sietse: There are two strategies for model building/analysis: (1) build the model, then perform sensitivity analysis (SA), then calibration; (2) build, then calibrate, then SA. In your presentation you described strategy (1). Do you think strategy (1) is more appropriate than strategy (2) when we are dealing with QSP models? If yes, why?
Question 2 to Sietse: If your QSP model includes more than 100 ODEs and you need to identify hundreds of parameters against hundreds of datasets, how do you perform practical identifiability analysis for such a model? What methods and software do you use for the task?
Recent scientific machine learning techniques create a feedback loop in which parameter estimation and uncertainty quantification can generate symbolic hypotheses for missing equations in one's model (for example, methods like "Universal Differential Equations for Scientific Machine Learning" and the SINDy work). Is it a major change to pre-clinical modeling pipelines to incorporate such methods? Or are the checks and balances within pharmaceutical QSP teams easily adaptable to this kind of change in model design?
Question to Sietse: Could you talk about the impact of data quantity and quality on all the aspects of model building you mentioned?
Throwing an answer to Oleg: https://github.com/insysbio/LikelihoodProfiler.jl is a really fantastic tool for practical identifiability from InSysBio. See their paper for the performance stats: it scales well. For structural identifiability, https://github.com/SciML/ModelingToolkit.jl has a compiler trick to perform SI on arbitrary ODE codes (without rewriting the model!)
LikelihoodProfiler Table 1 https://journals.plos.org/ploscompbiol/article?id=10.1371/journal.pcbi.1008495 shows it outperforming SimBiology by ~500x on large ODE models for practical identifiability.
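For readers unfamiliar with the profile-likelihood idea behind these tools, here is a minimal, hypothetical sketch in plain Python (not LikelihoodProfiler's actual Julia API) showing how profiling one parameter while re-optimizing the others exposes practical non-identifiability: in the toy model below only the product a*b is constrained by the data, so the profile over a stays flat.

```python
# Hypothetical sketch of profile-likelihood practical identifiability.
# Toy model: y = a * b * x, so only the product a*b is identifiable.
xs = [1.0, 2.0, 3.0]
ys = [2.0, 4.0, 6.0]  # data generated with a*b = 2

def sse(a, b):
    """Sum-of-squares error of the toy model for parameters (a, b)."""
    return sum((a * b * x - y) ** 2 for x, y in zip(xs, ys))

# Coarse grid over the nuisance parameter b (0.1 .. 10.0).
b_grid = [0.1 * i for i in range(1, 101)]

def profile(a):
    """Profile objective: minimize the error over the nuisance parameter b."""
    return min(sse(a, b) for b in b_grid)

# A profile that stays (near) flat over a wide range of `a` signals
# practical non-identifiability: the data only constrain the product a*b.
profile_vals = [profile(a) for a in (0.5, 1.0, 2.0, 4.0)]
print(profile_vals)
```

Real tools replace the grid search with proper optimization and locate the points where the profile crosses a likelihood-ratio threshold to get confidence intervals, but the flat-profile diagnostic is the same.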
Thank you Chris!
Thank you for the opportunity. This was a fun one.
Thank you, everyone, for participating. This session was recorded and will be available to view later.