-
Concerning documentation for data parallel DeepXDE, I would create a section after "Demos of Operator Learning" and before "FAQ"; do you agree @lululxvi? The name would be "Data parallel acceleration".
-
Hi! According to you @lululxvi, are there other ...? Also, it would be interesting to study data parallel acceleration for DeepONets. Which data class or example do you suggest I start with? Thank you!
-
Hi @lululxvi, it would be very useful to add the capability for hvd.DistributedOptimizer to take other parameters. For example, Horovod has the Adasum algorithm and fp16 compression (see the Horovod documentation). Are you OK if I follow a structure similar to what was used for L-BFGS (defining the parameters in an external dictionary)?
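Below is a minimal sketch of what such an external dictionary could look like, using the Horovod TensorFlow API (`hvd.Compression`, `hvd.Adasum`, `hvd.Average` do exist there). The names `hvd_options`, `set_hvd_options`, and `wrap_optimizer` are hypothetical; they only mirror the spirit of DeepXDE's external L-BFGS options and are not an actual DeepXDE API.

```python
# Hypothetical sketch (not DeepXDE's actual API): expose extra
# hvd.DistributedOptimizer arguments through a module-level dictionary,
# in the same spirit as DeepXDE's external L-BFGS options.
import horovod.tensorflow as hvd

# Assumed external dictionary of Horovod optimizer options.
hvd_options = {
    "compression": hvd.Compression.none,  # e.g. hvd.Compression.fp16
    "op": hvd.Average,                    # e.g. hvd.Adasum
}


def set_hvd_options(compression=None, op=None):
    """Hypothetical helper mirroring dde.optimizers.set_LBFGS_options."""
    if compression is not None:
        hvd_options["compression"] = compression
    if op is not None:
        hvd_options["op"] = op


def wrap_optimizer(optimizer):
    """Wrap a plain TF optimizer with the user-selected Horovod options."""
    return hvd.DistributedOptimizer(optimizer, **hvd_options)
```

A user would then call, for example, `set_hvd_options(compression=hvd.Compression.fp16, op=hvd.Adasum)` before compiling the model, just as L-BFGS parameters are set before training.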
-
Hi @lululxvi, what is the easiest way to obtain the domain training points (so they can be split appropriately)? Wouldn't it be easier to define them here: Line 263 in 742ef4d?
When I apply strong scaling, I want to split the domain training points over the ranks. Here: Line 180 in 742ef4d, it sounds like they correspond to the ...
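Just to sketch the intent (the array name `train_x_domain` is a hypothetical stand-in, not the attribute at the lines referenced above): once the full set of domain training points is available as a NumPy array, each rank could keep only its own contiguous chunk, so the global number of points stays fixed while the per-rank work shrinks, which is what strong scaling requires.

```python
# Strong-scaling sketch: split a fixed global set of domain training points
# across Horovod ranks. "train_x_domain" is a hypothetical stand-in for
# wherever DeepXDE stores the domain points.
import numpy as np
import horovod.tensorflow as hvd

hvd.init()


def split_over_ranks(train_x_domain):
    """Return the contiguous chunk of domain points owned by this rank."""
    chunks = np.array_split(train_x_domain, hvd.size())
    return chunks[hvd.rank()]


# Example: 10,000 points in 2D, divided over however many ranks were launched.
train_x_domain = np.random.rand(10000, 2)
local_points = split_over_ranks(train_x_domain)
print(f"rank {hvd.rank()}: {local_points.shape[0]} local points")
```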
-
- `tensorflow.compat.v1`: (Weak) data-parallel Horovod acceleration for tensorflow.compat.v1 #1205
- `mpi4py`: updated Dockerfile and docker image #1278
- `hvd.DistributedOptimizer`: Added options for hvd.DistributedOptimizer #1285
- `tensorflow.compat.v1`: Strong scaling data parallel acceleration for tensorflow.compat.v1 #1284
- `Sobol` distribution for weak scaling (see the sketch below)
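For the last item, here is a hedged sketch of the weak-scaling idea, assuming `dde.data.PDE` accepts `train_distribution="Sobol"` and that each rank builds its own data object: the per-rank number of points is fixed, so the global training set grows with the number of ranks. How to make each rank's Sobol sample distinct (seeding, offsetting into the sequence) is exactly the open design question.

```python
# Weak-scaling sketch (an assumption about the roadmap item, not a final design):
# every rank keeps the same per-rank number of Sobol training points, so the
# total number of points grows linearly with hvd.size().
import deepxde as dde
import horovod.tensorflow as hvd
import numpy as np

hvd.init()

geom = dde.geometry.Interval(0, 1)


def pde(x, y):
    # 1D Poisson equation: -y'' = pi^2 * sin(pi * x)
    dy_xx = dde.grad.hessian(y, x)
    return dy_xx + np.pi ** 2 * dde.backend.tf.sin(np.pi * x)


num_domain_per_rank = 32  # fixed work per rank; total = 32 * hvd.size()

data = dde.data.PDE(
    geom,
    pde,
    [],
    num_domain=num_domain_per_rank,
    train_distribution="Sobol",  # quasi-random points, as in the roadmap item
)
# Open question: make each rank's Sobol points distinct, e.g. by a
# rank-dependent seed or by offsetting each rank into the Sobol sequence.
```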