Round 2, Reviewer 3 comments #16

labarba opened this issue Oct 11, 2021 · 9 comments
labarba commented Oct 11, 2021

Reviewer #3 (Remarks to the Author):

In Table 6, why are the computational domains chosen to be cubic for all proteins in APBS? This is unfair to APBS in terms of efficiency.

In Table 7, the solvation free energy comparison shows very large differences (i.e., up to 1.9%) between MIBPB and the proposed method for some molecules. This is a serious problem. The convergence of MIBPB is known in the literature (see, for example: Nguyen et al., "Accurate, robust, and reliable calculations of Poisson–Boltzmann binding energies," Journal of Computational Chemistry 38.13 (2017): 941–948). As shown in Figure 3 of this reference, the averaged relative absolute error of the electrostatic solvation free energies for all 153 molecules, with mesh-size refinements from 1.1 to 0.2 Å, is under 0.3%. I suggest the authors carry out the same calculations as Nguyen et al. to find the averaged relative absolute errors of the present method and plot them against those of MIBPB. Please report the numbers of elements at all resolutions.

As noticed by Fenley and coworkers (Influence of Grid Spacing in Poisson–Boltzmann Equation Binding Energy Estimation, J Chem Theory Comput. 2013 August 13; 9(8): 3677–3685), the calculation of Poisson–Boltzmann binding energies is a challenging task. The authors need to produce reliable Poisson–Boltzmann binding energies, as shown by Nguyen et al. (see Figure 6 of the above-mentioned JCC reference), for the challenging task proposed by Fenley and coworkers. This test will reveal the level of performance of the present method.

Reviewer #3 (Remarks to the Author After Authors' Reply):

  • The authors appear to deflect an essential problem for their software: it does not converge as the state-of-the-art schemes do. There is indeed no exact solution for electrostatic solvation free energies of biomolecules. However, as pointed out by Fenley and coworkers, grid independence is essential. Fenley and Amaro (https://doi.org/10.1007/978-3-319-12211-3_3) pointed out many years ago that APBS has a convergence problem. The same problem was pointed out by Geng and Krasny. Although APBS is one of the most popular PB solvers in the user community, it is well known in the community of PB and GB developers that APBS is not the most trusted solver. For example, the generalized Born (GB) solver in Amber is calibrated with MIBPB, rather than APBS (see the work of Onufriev). It is a bad idea to compare convergence patterns with APBS.

  • While APBS has not resolved this problem, researchers have put much effort into improving the convergence of DelPhi, PBFD used in Amber, and MIBPB in the past decade. It is clear to me that, judged by Table 2, Bempp has the same convergence problem as APBS does, which has been criticized by Amaro, Fenley and coworkers, and many others in the literature. If the authors do not want to admit this problem, they should compare the convergence of Bempp with that of MIBPB for the molecules reported by Nguyen et al. I am quite sure that Bempp is not as convergent as DelPhi, PBFD used in Amber, and MIBPB. It is inappropriate for the authors to make unverified statements as PB method developers.

  • Three tables (i.e., Table 2, Table 6, and Table 7) appear to be designed to mislead. They should be merged into one table in which APBS, DelPhi, PBFD, MIBPB, and Bempp are compared at mesh sizes of 0.25, 0.5, and 1.0 Å to analyse their convergence rates.

  • It is well known that MIBPB is not as fast as DelPhi and APBS at a given mesh for a given protein. However, it might outperform all other methods in terms of efficiency in the sense that, at a given convergence level, it requires the least time. It is unfair to compare the execution time of MIBPB in the paper, but the authors should compare efficiency after they have established the convergence characteristics of their method.

  • I thank the authors for bringing to my attention the work of Geng et al. in 2007. Can the authors use the designed solutions in that paper to validate their method?

  • It is not valid to use different boundary settings and different formulations to skip the necessary comparisons suggested in my earlier comments. All methods should give essentially the same solvation free energy for a given protein with a given interface definition.

  • The Galerkin formulation does not automatically make Bempp immune to the problem of geometric singularity. The geometric singularities of protein solvent-excluded surfaces are much worse than Lipschitz, as shown by Geng and Krasny in their work. Krasny and Geng have been working on this issue for more than ten years. The authors need to define and construct high-order elements to achieve desirable convergence. I cannot find much related information about this aspect in the manuscript.

  • It might be misleading to use the Zika virus (PDB ID 6C08) as an example. This protein complex is highly symmetric. It would be silly not to make use of its symmetry in computations. It would be deceiving if symmetry is used. I would suggest the authors use the HIV viral capsid (1E6J), which is far less symmetric than the Zika viral capsid. Frankly, with the help of GPUs and parallel architectures, it is quite easy for any of the above-mentioned PB solvers to produce electrostatic analyses of these viruses.

labarba commented Oct 11, 2021

The authors appear to deflect an essential problem for their software: It does not converge as the state-of-the-art schemes do.

This reviewer's claim is not backed up by any justification, and contradicts the results presented. Our results show clearly that our solver converges as expected for a boundary element method. Let us quote Reviewer #6:

  • "Mesh-refinement studies confirm convergence as 1/N, for N boundary elements…"

As explained in the paper, a convergence rate of 1/N is the same as O(h^2) for a mesh spacing h (second-order convergence). According to Chen et al. 2011, MIBPB converges as h^2.
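
To make the equivalence concrete, here is a minimal Python sketch (the mesh sizes and error values are made up for illustration, not results from the paper): for a surface triangulation, the number of boundary elements N scales as 1/h^2, so an error that decays as 1/N decays as h^2.

```python
import numpy as np

# Illustration only: hypothetical mesh spacings and an error that decays as 1/N.
h = np.array([1.0, 0.5, 0.25])   # successive refinements of the mesh spacing (assumed)
N = 100.0 / h**2                 # boundary elements scale as N ~ C/h^2 (C assumed)
err = 50.0 / N                   # an error decaying as 1/N (illustrative)

# Observed order with respect to h: slope of log(err) versus log(h)
p = np.polyfit(np.log(h), np.log(err), 1)[0]
print(p)   # ~2.0, i.e., O(h^2)
```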

The reviewer claims that a Fenley and Amaro article (https://doi.org/10.1007/978-3-319-12211-3_3) points to a convergence problem with the APBS solver. That is incorrect. Let us cite from Fenley and Amaro:

  • "Results calculated using APBS display a clear convergence behavior with the grid spacing, making it easier to judge what an appropriate grid spacing is for the biological system being studied. DelPhi and PBSA even at large grid spacing provide a reasonable estimate of Epolar(bind) and display only minor fluctuations as a function of the grid spacing.”

What their paper does say is that APBS is not as accurate at large grid spacings as other codes (DelPhi and PBSA). But as the mesh is refined, they all converge to a similar answer (Fig. 3.2 of the Fenley and Amaro reference). In this same article, they recommend using a mesh spacing of 0.5 Å or less. Our simulations with APBS use meshes that are finer than this.

Geng and Krasny (https://www.sciencedirect.com/science/article/pii/S0021999113002404) do not point out any convergence issues. Fig 8 of their paper shows the solvation energy obtained with APBS decreasing linearly with mesh spacing, and Table 4 shows the error in this quantity converging linearly.
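
For reference, the observed order of convergence can be estimated from three successive refinements using the standard formula p = log2(|E_4h − E_2h| / |E_2h − E_h|), which assumes E(h) ≈ E* + C·h^p. The energies below are hypothetical, for illustration only:

```python
import math

# Hypothetical solvation energies (kcal/mol) at mesh spacings 4h, 2h, and h.
E_4h, E_2h, E_h = -240.0, -242.0, -243.0

# Observed order of convergence from three mesh levels
p = math.log2(abs(E_4h - E_2h) / abs(E_2h - E_h))
print(p)   # 1.0 -> first-order (linear) convergence in h, as observed for APBS
```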

The fact that Amber parameterizes their GB model with MIBPB rather than APBS is just a choice. MIBPB does claim to be more accurate than APBS for the same mesh size (Chen et al. 2011), and the choice of using MIBPB might be related to that.

labarba commented Oct 11, 2021

It is clear to me that, judged by Table 2, Bempp has the same convergence problem as APBS does,

As explained in the previous point, there is no convergence problem with APBS. Granted, APBS converges as O(h), i.e., it is first order, which is slower than other methods; indeed, we observed and show linear convergence of APBS in our results, as did Geng and Krasny. However, the reviewer's claim is incoherent: how can two solvers, one based on partial differential equations solved with the finite difference method, and the other based on integral equations solved with the boundary element method, have the "same" convergence problem? This is nonsensical.

labarba commented Oct 11, 2021

Three tables (i.e., Table 2, Table 6, and Table 7) appear to be designed to mislead.

Even though this is not a constructive statement, let us clarify. It is meaningless to compare different solvers that apply different numerical schemes at the "same" mesh spacing. Instead, we analyze and present the results for each solver individually, not to obfuscate, but because it is the correct thing to do.
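
What is meaningful is to study each solver's own convergence and, if desired, extrapolate each one to zero mesh spacing before any comparison. A minimal sketch of that idea, using the standard Richardson formula E* ≈ E_h + (E_h − E_2h)/(2^p − 1); the energy values and orders below are invented for illustration:

```python
def extrapolate(E_2h, E_h, p):
    """Richardson extrapolation to zero mesh spacing, assuming E(h) ~ E* + C*h^p."""
    return E_h + (E_h - E_2h) / (2**p - 1)

# Invented values for two solvers with different convergence orders; the point is
# that each solver is extrapolated to its own limit rather than compared at the
# same mesh spacing.
fd_limit  = extrapolate(E_2h=-242.0, E_h=-243.0, p=1)   # a first-order finite-difference solver
bem_limit = extrapolate(E_2h=-243.5, E_h=-243.8, p=2)   # a second-order boundary element solver
print(fd_limit, bem_limit)   # ~ -244.0 and ~ -243.9
```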

labarba commented Oct 11, 2021

It is well known that MIBPB is not as fast as DelPhi and APBS at a given mesh for a given protein. […] It is unfair to compare the execution time of MIBPB in the paper

We pointed out in the previous review replies that we are not comparing timings. Our paper does not include execution times for MIBPB, so it is unclear how the comparison can be unfair.

labarba commented Oct 11, 2021

I thank the authors for bringing to my attention the work of Geng et al. in 2007. Can the authors use the designed solutions in that paper to validate their method?

We have verified Bempp with the Kirkwood sphere, a standard benchmark for PB solvers. Problems with complex geometries do not have an analytical solution. We cannot use the designed solutions in Geng et al. 2007 (section III.A.2) to validate our method because: (1) it is not exactly the PB equation, as there is an extra source term (the function k(r)); and (2) the k(r) function varies in space, which cannot be dealt with by a BEM approach.
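
For context, the simplest instance of the Kirkwood-sphere benchmark is a single charge at the center of a spherical cavity, where the series solution reduces to the Born expression. The parameters below are illustrative choices, not the ones used in the paper:

```python
K = 332.0636  # Coulomb constant, kcal/mol * Angstrom / e^2

def born_solvation_energy(q=1.0, a=1.0, eps_in=1.0, eps_out=80.0):
    """Electrostatic solvation (reaction-field) energy, in kcal/mol, of a charge q (in e)
    centered in a sphere of radius a (in Angstrom), with interior permittivity eps_in
    and exterior permittivity eps_out."""
    return (K * q**2 / (2.0 * a)) * (1.0 / eps_out - 1.0 / eps_in)

print(born_solvation_energy())   # ~ -164 kcal/mol for a unit charge in a 1 A sphere
```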

labarba commented Oct 11, 2021

It is not valid to use different boundary settings and different formulations to skip necessary comparisons as suggested in my earlier comments. All methods should give essentially the same solvation free energy for a given protein with a given interface definition.

Even extrapolating to infinite mesh resolution, modeling errors would remain, and these are different for different solvers, particularly if one is comparing a BEM solver and a finite-difference solver: these solve different mathematical equations.

labarba commented Oct 12, 2021

The Galerkin formulation does not automatically make Bempp immune to the problem of geometric singularity.

It is unclear to us what the reviewer means by this comment. There is a very remote possibility of having cusps in the solvent-excluded surface, which does not happen in any of the cases shown in our paper. The reviewer did not point us to the work of Geng and Krasny where they discuss this issue, and we do not see any discussion of it in the published literature. The reviewer's claims should also be substantiated with citations, but they are not.

labarba commented Oct 12, 2021

It might be misleading to use the Zika virus (PDB ID 6C08) as an example. This protein complex is highly symmetric. It would be silly not to make use of its symmetry in computations.

We use the Zika virus to showcase the calculation size: the capacity of the solver to handle large systems. That is the purpose of the demo. We might ask the reviewer to point us to easy ways of exploiting this symmetry; however, this is irrelevant to our claims.

The reviewer's comment about adopting GPU hardware and parallel computing is out of scope and seems to underestimate the difficulty involved. Moreover, our whole point is to provide ease of use to researchers, which is usually lacking in HPC settings.

labarba commented Oct 12, 2021

The reviewer does not acknowledge our remarks on different mathematical formulations and numerical schemes.

From Oberkampf and Roy (2010), Verification and Validation in Scientific Computing:

pp. 173–174

Following Trucano et al. (2003), code-to-code comparisons are only useful when (1) the two codes employ the same mathematical models and (2) the "reference" code has undergone rigorous code verification assessment or some other acceptable type of code verification. Even when these two conditions are met, code-to-code comparisons should be used with caution.
If the same models are not used in the two codes, then differences in the code output could be due to model differences and not coding mistakes. Likewise, agreement between the two codes could occur due to the serendipitous cancellation of errors due to coding mistakes and differences due to the model. A common mistake made while performing code-to-code comparisons with codes that employ different numerical schemes (i.e., discrete equations) is to assume that the codes should produce the same (or very similar) output for the same problem with the same spatial mesh and/or time step. On the contrary, the code outputs will only be the same if exactly the same algorithm is employed, and even subtle algorithm differences can produce different outputs for the same mesh and time step.
