
for people in deep learning. I want a small, performant test server and was thinking of running it on a GeForce 3070 (8 GB). Considering AMD doesn't support CUDA, would I kill myself getting a 6800 XT instead?


I think you will still likely encounter a lot of upgrade headaches. If you can wait, see if PyTorch/TF add the 6800 XT to their test servers (it seems they currently test using a Vega 20, but I'm not certain).
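If you do go the AMD route, it's easy to check what the framework actually sees. ROCm builds of PyTorch reuse the `torch.cuda` namespace, so (assuming a ROCm build that supports your card) the same sketch works on either vendor:

```python
import torch

# Both CUDA and ROCm builds of PyTorch expose the torch.cuda API,
# so the same check works for an Nvidia 3070 or an AMD 6800 XT.
if torch.cuda.is_available():
    print("GPU backend:", torch.cuda.get_device_name(0))
    # torch.version.hip is None on CUDA builds, a version string on ROCm builds
    print("ROCm/HIP build:", torch.version.hip is not None)
else:
    print("No supported GPU found; falling back to CPU")
```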


You'll want tensor cores, so I would stick to the 2000- or 3000-series Nvidia cards. Not to mention the headaches of using AMD cards with any ML framework.


Honestly? I'd go with a Jetson dev kit. A couple hundred bucks and a whole lot of performance. Plus you get CUDA.


An old $100 PC with an RTX 2060 6GB would be a lot faster than a Jetson NX and only about $400 total.


I was looking at the 4 GB version. Would I be able to do most Kaggle competitions as practice?


Careful, there are no ML drivers for Ampere, and FP32/FP64 is capped.

These are gaming cards. They'll charge you more for the compute cards, when/if they're available.


Not to reply to you twice in two different threads, but for this thread's sake: you can absolutely run things like TF on them, and they absolutely kick ass in performance per dollar when you do. I.e., look into what you actually need out of "ML" and CUDA, and the perf/dollar, before assuming you need to buy a Quadro simply because you're not playing games on it.

https://www.evolution.ai/post/benchmarking-deep-learning-wor...
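The perf/dollar comparison is just throughput divided by price. A minimal sketch — the throughput and price figures below are made up for illustration; substitute real benchmark numbers (e.g. from a link like the one above):

```python
# Hypothetical figures for illustration only -- plug in real
# benchmark throughput (images/sec) and street prices.
cards = {
    "RTX 2060 6GB (gaming)": {"images_per_sec": 190.0, "price_usd": 300.0},
    "Jetson Xavier NX":      {"images_per_sec": 30.0,  "price_usd": 400.0},
}

def perf_per_dollar(card):
    """Throughput per dollar spent on the card."""
    return card["images_per_sec"] / card["price_usd"]

# Rank cards by perf/dollar, best first
for name, card in sorted(cards.items(), key=lambda kv: -perf_per_dollar(kv[1])):
    print(f"{name}: {perf_per_dollar(card):.3f} images/sec per dollar")
```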


Still useful for ML.



