Replies: 3 comments 2 replies
Also in favor of dropping the
I would find it easier to comprehend if it's still described in the docs as $n$ distributions and $m$ transformations, which are combined into a
I am in favor of keeping `log10` as an option for parameter priors. If I use that as a hint to the optimizer to optimize a parameter on log10 scale, it makes debugging easier than the natural-log scale. But I don't see a reason to keep `log10` for observables. Unrelated:
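As a side note on the log vs. log10 distinction for noise models: the two scales are equivalent up to rescaling the noise parameter by $\ln 10$, so `log10` is mainly a readability convenience. A minimal sketch with SciPy (the variable names `y`, `mu`, `sigma10` are illustrative, not PEtab fields):

```python
import numpy as np
from scipy.stats import norm

y, mu = 3.0, 2.0   # measurement and simulated value (illustrative)
sigma10 = 0.2      # noise standard deviation on log10 scale

# Normal noise on log10(y) ...
ll_log10 = norm.logpdf(np.log10(y), loc=np.log10(mu), scale=sigma10)

# ... equals normal noise on ln(y) with the sd scaled by ln(10),
# up to the constant offset log(ln 10) from the change of variables
ll_ln = norm.logpdf(np.log(y), loc=np.log(mu), scale=sigma10 * np.log(10))

assert np.isclose(ll_ln, ll_log10 - np.log(np.log(10)))
```

Since the offset is constant in the model parameters, an optimizer sees the same objective landscape either way.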
In favor of dropping
Continuing the discussion from #613, which went a bit off-topic there:
Shall we merge `observable.noiseDistribution` and `observable.observableTransformation` in PEtab v2? I.e., instead of `{noiseDistribution=normal, observableTransformation=log}`, one could just specify `{noiseDistribution=log-normal}`.

I am in favor. I think just saying that the measurement error is log-normally distributed is more understandable than talking about observable transformations.
The only downside I can see is that instead of listing $n$ distributions and $m$ transformations, we'd have to list $n \cdot m$ distributions in the specs.
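For intuition on why the two encodings are interchangeable: a normal distribution on the log-transformed measurement is exactly a log-normal distribution on the untransformed one, up to the constant Jacobian term $\log y$. A minimal sketch with SciPy (the names `y`, `mu`, `sigma` are illustrative, not PEtab fields):

```python
import numpy as np
from scipy.stats import lognorm, norm

y, mu, sigma = 2.5, 2.0, 0.3  # measurement, simulated value, noise parameter

# Current style: normal noise on the log-transformed observable
ll_transformed = norm.logpdf(np.log(y), loc=np.log(mu), scale=sigma)

# Proposed merged style: log-normal noise on the untransformed observable
# (scipy parameterizes lognorm via s=sigma and scale=exp(log-mean))
ll_lognormal = lognorm.logpdf(y, s=sigma, scale=mu)

# The two differ only by the Jacobian term log(y), which is constant
# in the model parameters and hence irrelevant for optimization
assert np.isclose(ll_lognormal, ll_transformed - np.log(y))
```

So specifying `{noiseDistribution=log-normal}` would carry the same information as the current two-field combination, just stated in one place.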
Related question: do we need log10-type distributions (i.e., the current `observableTransformation=log10`)? I can see how those distributions are more interpretable in certain contexts (replications, orders of magnitude, ...), but I am not sure how relevant this is in the PEtab context. Do you have good use cases?