octave-maintainers

GSoC 2020 Idea Discussion


From: Atharva Dubey
Subject: GSoC 2020 Idea Discussion
Date: Tue, 25 Feb 2020 20:23:55 +0530

Respected Sir,
mlpack is open-source software; according to its distribution clause, redistribution of mlpack source and binaries, with or without modification, is permitted under the BSD license. Nvidia's cuDNN library is an SDK and can be used without violating its licensing terms, as stated on its licensing page (https://docs.nvidia.com/deeplearning/sdk/cudnn-sla/index.html#general).

Also, CUDA binaries and header files need not be distributed from our side; the user would need to have CUDA and cuDNN installed on their system beforehand. This is also the case for popular libraries like TensorFlow and PyTorch.

Also, this would not be a workaround or a hack, because when a person installs the toolbox, they would get the runtime files and precompiled binaries. For example, suppose we have a training function trainModel(Param1, Param2, Param3, Cuda). When this function is called from the Octave interface, Octave would call into a C++ backend that does the actual work (this is what I meant by a C++ backend). If the Cuda parameter is true, the model would be loaded onto the GPU to speed up training; this is where the CUDA libraries and the CUDA SDK would come into the picture. Since we would not be distributing the library or any of its files in any form, licensing should not be an issue. When Cuda is true, control would be passed to the CUDA libraries preinstalled on the user's system.

As mentioned, parallelization libraries are architecture-specific. Using Nvidia GPUs for DL/ML tasks is the industry norm, and every major library or software package (like MATLAB) supports it. Therefore, I thought adding CUDA support to Octave would be worthwhile.

Please share your thoughts on it. 
Thanks and Regards 
Atharva Dubey
