Server (computing)
Does anyone know of a solution?
_________________
MacbookPro (early 2011): dual-core i5 2.3 GHz, 8 GB RAM, 320 GB HDD
iPhone 6s+, 64 GB, iOS 10
Own (history): iPhone 4s, 32 GB, iOS 6.1.1
What I'd actually need comes to about $40 a month... and as for the free tiers, it seems my own machine already has better specs. So: I need as many cores as possible, a little RAM, and even less disk. Where can I get that?
Hmm, I'm having a hard time optimizing the code. It's about matrix multiplication in numpy.

Example of the matrix multiplication:
C = np.dot(A, B)
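As a rough sketch of the workload in question (the matrix sizes here are made up for illustration), `np.dot` already delegates to whatever BLAS library numpy was built against, which is usually multi-threaded:

```python
import numpy as np

# Illustrative sizes only -- the real workload may differ
A = np.random.rand(500, 500).astype(np.float32)
B = np.random.rand(500, 500).astype(np.float32)

# np.dot dispatches to the linked BLAS (e.g. OpenBLAS, MKL),
# so on most builds this already uses all available cores
C = np.dot(A, B)
print(C.shape)  # (500, 500)
```

If this is already BLAS-backed, more cores on the same machine may help more than hand-optimizing the Python side.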
I haven't looked into this myself yet... but I'd still give it a try. A link: http://deeplearning.net/software/theano/tutorial/using_gpu.html
Any GPU, even an integrated one, is better at vector and matrix operations than a plain CPU. There's OpenCL for Intel integrated graphics too.

What Can Be Accelerated on the GPU

The performance characteristics will change as we continue to optimize our implementations, and vary from device to device, but to give a rough idea of what to expect right now:

- Only computations with float32 data-type can be accelerated. Better support for float64 is expected in upcoming hardware but float64 computations are still relatively slow (Jan 2010).
- Matrix multiplication, convolution, and large element-wise operations can be accelerated a lot (5-50x) when arguments are large enough to keep 30 processors busy.
- Indexing, dimension-shuffling and constant-time reshaping will be equally fast on GPU as on CPU.
- Summation over rows/columns of tensors can be a little slower on the GPU than on the CPU.
- Copying of large quantities of data to and from a device is relatively slow, and often cancels most of the advantage of one or two accelerated functions on that data. Getting GPU performance largely hinges on making data transfer to the device pay off.
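Since those notes say only float32 work is accelerated, a minimal sketch of casting numpy's default float64 data down before shipping it to the device (the array size is arbitrary):

```python
import numpy as np

# numpy defaults to float64; per the notes above, only float32
# computations are accelerated, so cast once, up front
A64 = np.random.rand(256, 256)
A32 = A64.astype(np.float32)

# The cast loses some precision, but halves memory use and,
# more importantly, halves the transfer volume to the device
print(A64.dtype, A32.dtype)      # float64 float32
print(A64.nbytes // A32.nbytes)  # 2
```

The transfer-cost caveat above is the key one: casting and copying pays off only if enough accelerated work is done on the data once it's on the GPU.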
Thanks, nummy and No9.