Nvidia Flare: ML/DL models Limitations and Potentials #1726
-
I am interested in understanding the limitations of NVIDIA FLARE. I'm aware that it supports several frameworks, such as PyTorch, TensorFlow, NumPy, and MONAI. I am wondering whether any model built using these frameworks would be able to function in a federated manner, or if there is a particular subset of models better suited to the platform. Moreover, I am relatively sure that I can construct models such as neural networks, logistic regression, linear regression, support vector machines, k-means, XGBoost, and random forests using NVIDIA FLARE. However, I am curious about the feasibility of implementing other models, such as Naive Bayes, decision trees, or gradient boosting, to mention a few. Would these models also be compatible with NVIDIA FLARE? Thanks in advance!
-
Hi @Datumologist thanks for the questions! My responses:
@YuanTingHsieh gives a detailed answer; adding one additional point on "compatibility":
XGBoost and random forest are two of the most widely used methods in the "decision trees or gradient boosting" family. As for Naive Bayes, it can be formulated in a distributed/federated way (and implemented with NVFlare) by collecting the sufficient statistics from each client. So they can all be implemented with NVFlare.
As @YuanTingHsieh mentioned, the real question is "would/how these models be compatible with federated learning". There is no "compatibility" issue on the NVFlare side as long as we can formulate a method under a federated setting and have sufficient API support from the package we use (e.g. scikit-learn).
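To make the stats-collection idea concrete, here is a minimal sketch (plain NumPy, not the NVFlare API) of how Gaussian Naive Bayes could be federated: each client computes per-class counts, feature sums, and sums of squares locally, and a server aggregates them into global priors, means, and variances. The function names `local_stats` and `aggregate` are illustrative, not part of any library.

```python
import numpy as np

def local_stats(X, y, n_classes):
    """Per-client sufficient statistics for Gaussian Naive Bayes:
    class counts, per-class feature sums, and sums of squares."""
    n_features = X.shape[1]
    counts = np.zeros(n_classes)
    sums = np.zeros((n_classes, n_features))
    sq_sums = np.zeros((n_classes, n_features))
    for c in range(n_classes):
        Xc = X[y == c]
        counts[c] = len(Xc)
        sums[c] = Xc.sum(axis=0)
        sq_sums[c] = (Xc ** 2).sum(axis=0)
    return counts, sums, sq_sums

def aggregate(stats_list):
    """Server-side step: pool the clients' statistics element-wise,
    then derive global priors, per-class means, and variances."""
    counts = sum(s[0] for s in stats_list)
    sums = sum(s[1] for s in stats_list)
    sq_sums = sum(s[2] for s in stats_list)
    means = sums / counts[:, None]
    variances = sq_sums / counts[:, None] - means ** 2
    priors = counts / counts.sum()
    return priors, means, variances
```

Because the statistics are additive, the aggregated model is mathematically identical to one trained on the pooled data, while only summaries (never raw samples) leave each client.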
We will have a blo…