MFBM MS-6
Christopher Rackauckas
03:11
Enable screen share?
Oleg Demin Jr
04:05
Please allow me to share my screen too. I'm the second speaker.
Sietse Braakman
04:47
Same for me, I am the fourth speaker. Thanks!
Oleg Demin Jr
05:03
Now I can. Thank you!
Math Dept
05:49
I think it is fixed now!
Helen Moore
18:33
I'm curious about which models this method speeds up vs. which models it does not. À la the "No Free Lunch Theorem" (optimization methods are equivalent when averaged over all possible models), it's important to characterize the classes of models for which a speedup works, in order to compare it fairly against methods that work on larger classes of models.
Oleg Demin
18:54
Question to Christopher: does the method described in the presentation work for arbitrary RHS non-linearities, or only for mass-action laws?
Sietse Braakman
21:28
How long does it take to train your neural network to obtain your surrogate model, i.e. what is the upfront compute cost before you can start benefiting from the reduced dimensionality of the surrogate model?
Vincent Lemaire
40:20
Question for Oleg. In example #3 on population generation, is the identification of the parameters that need to be recalibrated done automatically and recursively?
Jane.Bai@fda.hhs.gov
42:32
Could the impact of gender or age be taken into account in the Vpop?
Helen Moore
43:37
Are you saying that it is better to let extra parameters vary (rather than fixing parameters that are not identifiable from the data)? If that many parameters are allowed to vary, the parameter space is highly undersampled, so it is not clear to me that letting them all vary is a better approach than fixing some.
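(For scale, a back-of-the-envelope point in support of the undersampling concern, not from the talk: with $k$ varied parameters and $N$ sampled parameter sets, the effective resolution per parameter axis scales as
\[
N^{1/k}, \qquad \text{e.g. } N = 10^6,\; k = 20 \;\Rightarrow\; 10^{6/20} \approx 2
\]
distinct levels per axis.)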
Christopher Rackauckas
44:50
When doing such Vpops of PBPK simulations, specifically on the PK version, and looking to incorporate covariate effects like sex or weight, is there a reason not to take an iPSP approach and characterize the variability using nonlinear mixed-effects approaches? [Given that the NLME training is fast/robust enough.]
Helen Moore
59:32
1) Was the sensitivity analysis performed by varying one parameter at a time, or multiple parameters at a time? 2) Was the sensitivity measure normalized to account for different scales of different parameters?
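(For context on question 2: one standard normalization, general practice rather than anything specific to this talk, is the relative sensitivity coefficient
\[
\bar{S}_{ij} = \frac{p_j}{y_i}\,\frac{\partial y_i}{\partial p_j} = \frac{\partial \ln y_i}{\partial \ln p_j},
\]
which is dimensionless and therefore comparable across parameters with different scales.)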
Shubham Tripathi
01:02:25
Given that your QSP model is a qualitative one with discrete node levels (0 / 1) and logic-based rules, how do you get the continuous-looking plots shown in the talk? Do you average over multiple asynchronous runs to obtain these?
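(To illustrate the averaging idea in the question, here is a minimal sketch in Julia of an asynchronous update scheme averaged over runs. The two-node network and its rules are hypothetical stand-ins, not the model from the talk.)

    using Random, Statistics

    # Hypothetical 2-node logic model: each rule maps the full state
    # to that node's next discrete level (0/1).
    rules = [s -> s[2],        # node 1 is activated by node 2
             s -> 1 - s[1]]    # node 2 is inhibited by node 1

    function async_run(rules, s0, nsteps; rng = Random.default_rng())
        s = copy(s0)
        traj = zeros(Int, nsteps + 1, length(s))
        traj[1, :] = s
        for t in 1:nsteps
            i = rand(rng, 1:length(s))  # asynchronous: one random node per step
            s[i] = rules[i](s)
            traj[t + 1, :] = s
        end
        traj
    end

    # Averaging node levels across many stochastic runs gives
    # continuous-looking trajectories in [0, 1].
    runs = [async_run(rules, [1, 0], 50) for _ in 1:500]
    mean_traj = mean(runs)  # elementwise mean over runs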
Vincent Lemaire
01:03:22
Could you give a bit more detail on how the logical interactions you mentioned are represented mathematically and simulated?
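(One common mathematical representation, standard in the logical-modeling literature and not necessarily what the speaker uses: logic gates become algebraic operations on node levels $x \in \{0,1\}$, or on $[0,1]$ in continuous relaxations,
\[
x_{A \wedge B} = x_A x_B, \qquad x_{A \vee B} = x_A + x_B - x_A x_B, \qquad x_{\neg A} = 1 - x_A,
\]
and the rules are then iterated synchronously or asynchronously, or embedded as steady-state targets of ODEs.)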
Oleg Demin
01:20:52
Question to Sietse: There are two strategies for model building/analysis: (1) build the model, then perform SA, then calibrate; (2) build, then calibrate, then perform SA. In your presentation you described strategy (1). Do you think strategy (1) is more appropriate than strategy (2) when dealing with QSP models? If yes, why?
Oleg Demin
01:25:05
Question 2 to Sietse: If your QSP model includes more than 100 ODEs and you need to identify hundreds of parameters against hundreds of datasets, how do you perform practical identifiability analysis for such a model? What method and software do you use for the task?
Christopher Rackauckas
01:26:24
Recent scientific machine learning techniques create a feedback loop in which parameter estimation and uncertainty quantification can generate symbolic hypotheses for missing equations in one's model (for example, methods like "Universal Differential Equations for Scientific Machine Learning" and the SINDy work). Is it a major change for pre-clinical modeling pipelines to incorporate such methods? Or are the checks and balances within pharmaceutical QSP teams easily adaptable to this kind of change in model design?
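(For readers who have not seen SINDy, a minimal self-contained sketch in Julia of its core step, sequentially thresholded least squares, on toy data; this is an illustration, not the authors' implementation.)

    using LinearAlgebra

    # Toy data: samples of x and "measured" derivatives from dx/dt = -2x.
    x  = collect(0.1:0.1:2.0)
    dx = -2 .* x

    # Candidate library of terms: [1, x, x^2].
    Θ = hcat(ones(length(x)), x, x .^ 2)

    # Sequentially thresholded least squares: fit, prune small
    # coefficients, refit on the surviving terms.
    function stlsq(Θ, dx; λ = 0.1, iters = 10)
        ξ = Θ \ dx
        for _ in 1:iters
            small = abs.(ξ) .< λ
            ξ[small] .= 0.0
            active = .!small
            ξ[active] = Θ[:, active] \ dx
        end
        ξ
    end

    stlsq(Θ, dx)  # ≈ [0.0, -2.0, 0.0], i.e. it recovers dx/dt = -2x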
Vincent Lemaire
01:26:30
Question to Sietse: Could you talk about the impact of data quantity and quality on all the aspects of model building you mentioned?
Christopher Rackauckas
01:29:51
Throwing an answer to Oleg: https://github.com/insysbio/LikelihoodProfiler.jl is a really fantastic tool for practical identifiability from InSysBio. See their paper for the performance stats: it scales well. For structural identifiability, https://github.com/SciML/ModelingToolkit.jl has a compiler trick to perform SI on arbitrary ODE codes (without rewriting the model!)
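(For anyone who wants to try it, a minimal usage sketch of LikelihoodProfiler.jl based on my reading of its README; the three-parameter loss function is a toy, not a QSP model.)

    using LikelihoodProfiler

    # Toy likelihood-based loss over three parameters; p[3] is
    # deliberately non-identifiable.
    loss(p) = 5.0 + (p[1] - 3.0)^2 + (p[1] - p[2] - 1.0)^2 + 0 * p[3]^2

    # Confidence interval for parameter 1 by the CICO method.
    res = get_interval(
        [3.0, 2.0, 2.1],  # best-fit values, used as the starting point
        1,                # index of the parameter to profile
        loss,
        :CICO_ONE_PASS;
        loss_crit = 9.0   # critical loss level defining the interval
    )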
Christopher Rackauckas
01:32:41
LikelihoodProfiler Table 1 https://journals.plos.org/ploscompbiol/article?id=10.1371/journal.pcbi.1008495 shows it outperforming SimBiology by ~500x on large ODE models for practical identifiability.
Khamir
01:33:23
Thank you Chris!
Christopher Rackauckas
01:33:55
Thank you for the opportunity. This was a fun one.
Khamir
01:33:59
Thank you everyone for participating. This session was recorded and will be available to view later.