Nvidia Volta - Titan V thread

Message boards : Number crunching : Nvidia Volta - Titan V thread

hasherati
Joined: 9 Oct 17
Posts: 11
Credit: 3,123,660
RAC: 0
United States
Message 1905704 - Posted: 8 Dec 2017, 20:41:36 UTC

Wanted to start a discussion here on the heels of Nvidia's release of the Titan V card based on the Volta architecture. I put my order in yesterday and it's going to be 3-5 days until I can report if things work as expected but wanted to ask here if anyone knows whether the client is fully optimized to work with Volta already?
ID: 1905704
Richard Haselgrove Project Donor
Volunteer tester
Joined: 4 Jul 99
Posts: 12035
Credit: 119,784,065
RAC: 43,448
United Kingdom
Message 1905716 - Posted: 8 Dec 2017, 20:59:17 UTC - in response to Message 1905704.  

Eric is complaining that it doesn't have enough video memory to use as a front-line telescope signal pre-processor.
ID: 1905716
Grant (SSSF)
Volunteer tester
Joined: 19 Aug 99
Posts: 9977
Credit: 130,718,632
RAC: 80,966
Australia
Message 1905741 - Posted: 8 Dec 2017, 22:57:38 UTC - in response to Message 1905704.  
Last modified: 8 Dec 2017, 23:06:41 UTC

... but wanted to ask here if anyone knows whether the client is fully optimized to work with Volta already?

The Linux special application is the one able to get the most work out of it at this stage; next in line is the SoG application if running Windows (your GTX 1080 Tis could produce a lot more work than they do with the right command line values).
However, considering it's just been released, I'd be truly amazed if anyone has had a go at working on code that will take advantage of its improved CUDA architecture, let alone make use of its Tensor cores.
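For reference, those command line values go in the SoG application's command-line text file in the SETI@home project directory; the exact filename depends on the build (something along the lines of mb_cmdline_win_x86_SSE3_OpenCL_NV_SoG.txt), and the values below are only a commonly-suggested starting point for high-end cards, not known-good settings for a 1080 Ti or Titan V:

```
-sbs 1024 -period_iterations_num 1 -high_perf
```

Roughly, -sbs sets the single buffer size and lower -period_iterations_num values keep the GPU busier at the cost of screen lag; tune per card and check results before committing.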

EDIT- it looks like Nvidia have already released a driver with Titan V support, but there are already reports of issues (TDR errors, blanking during Blu-ray playback, and display blanking on G-Sync monitors under certain conditions).
Most of the issues so far appear to be video related- and while this is a video card, it is really a compute card with video output. The only possible show stopper for crunching could be the TDR issues. Are they occurring in games/video display, or when crunching?
Could be a while before the driver support is mature.
Grant
Darwin NT
ID: 1905741
Keith Myers Special Project $250 donor
Volunteer tester
Joined: 29 Apr 01
Posts: 4611
Credit: 295,600,315
RAC: 602,440
United States
Message 1905792 - Posted: 9 Dec 2017, 2:31:51 UTC

Just need to get one into Petri's hands. He already is aware of Tensor computing and has been reading up on it.
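For anyone curious what that means in practice: each Volta tensor core does a fused 4x4 matrix multiply-accumulate per clock, with FP16 inputs and FP32 accumulation. A rough numpy sketch of that single operation (purely illustrative; not code from any SETI application):

```python
import numpy as np

# One tensor-core op is D = A @ B + C on 4x4 tiles:
# A and B are FP16; C and the accumulator D are FP32.
A = np.random.rand(4, 4).astype(np.float16)
B = np.random.rand(4, 4).astype(np.float16)
C = np.zeros((4, 4), dtype=np.float32)

# Inputs are widened to FP32 before the multiply-accumulate,
# which is why tensor cores lose less precision than pure FP16 math.
D = A.astype(np.float32) @ B.astype(np.float32) + C

print(D.shape, D.dtype)  # (4, 4) float32
```

The hard part for any SETI application would be recasting its FFT-heavy signal processing into those matrix tiles, which is why it would need someone like Petri to dig in.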
Seti@Home classic workunits:20,676 CPU time:74,226 hours
ID: 1905792
Keith Myers Special Project $250 donor
Volunteer tester
Joined: 29 Apr 01
Posts: 4611
Credit: 295,600,315
RAC: 602,440
United States
Message 1905796 - Posted: 9 Dec 2017, 2:42:46 UTC - in response to Message 1905716.  

Eric is complaining that it doesn't have enough video memory to use as a front-line telescope signal pre-processor.

I wondered about that myself. Why not 16GB of memory? And why not GDDR6? Why are you lining the pockets of your competitor with use of HBM2 memory?

My cynicism says that 6 months hence we'll see the AIB versions of the board, with performance that embarrasses the Nvidia-sourced product. Like what happened with the Titan Xp and the 1080 Ti.
Seti@Home classic workunits:20,676 CPU time:74,226 hours
ID: 1905796
Grant (SSSF)
Volunteer tester
Joined: 19 Aug 99
Posts: 9977
Credit: 130,718,632
RAC: 80,966
Australia
Message 1905802 - Posted: 9 Dec 2017, 2:59:30 UTC - in response to Message 1905796.  

I wondered about that myself. Why not 16GB of memory?

Because that is on their Tesla Volta cards that sell for $10,000.

And why not GDDR6?

Because it's not being produced in volume yet.
The noises are that products using it should appear next year, maybe in the first half. Maybe.
Jan 17th, CES 2018, is when people are anticipating an announcement by Nvidia on Volta-based consumer video cards using GDDR6, and on when they will be made available.

Why are you lining the pockets of your competitor with use of HBM2 memory?

Because at $10,000+ for each of the cards they sell that uses it (and now an extra $3,000 per card for chips that otherwise wouldn't have been used), they make themselves a lot more money in the long run than if they had waited for GDDR6 to be ready before releasing Volta.
Grant
Darwin NT
ID: 1905802
Keith Myers Special Project $250 donor
Volunteer tester
Joined: 29 Apr 01
Posts: 4611
Credit: 295,600,315
RAC: 602,440
United States
Message 1905823 - Posted: 9 Dec 2017, 4:38:51 UTC - in response to Message 1905802.  

I wondered about that myself. Why not 16GB of memory?

Because that is on their Tesla Volta cards that sell for $10,000.

And why not GDDR6?

Because it's not being produced in volume yet.
The noises are that products using it should appear next year, maybe in the first half. Maybe.
Jan 17th, CES 2018, is when people are anticipating an announcement by Nvidia on Volta-based consumer video cards using GDDR6, and on when they will be made available.

Why are you lining the pockets of your competitor with use of HBM2 memory?

Because at $10,000+ for each of the cards they sell that uses it (and now an extra $3,000 per card for chips that otherwise wouldn't have been used), they make themselves a lot more money in the long run than if they had waited for GDDR6 to be ready before releasing Volta.

I wonder how many cards ($10K or $3K) actually get sold between the announcement now and next year. Part of the high retail cost for these cards is the very large proportion of cost allotted to the HBM2 memory, which is still very expensive because of the very poor yields being achieved. And the yields for the NV100 chip can't be great either, because it is HUGE. If you had waited till next year, when GDDR6 was viable and possibly a die-shrink (because yields on 10nm process chips would have finally improved), you might have been able to sell more cards at a better price point. I think most of this announcement is just PR for Nvidia and is not going to amount to much added to their bottom line. My $0.02 of gazing into my crystal ball.
Seti@Home classic workunits:20,676 CPU time:74,226 hours
ID: 1905823
Grant (SSSF)
Volunteer tester
Joined: 19 Aug 99
Posts: 9977
Credit: 130,718,632
RAC: 80,966
Australia
Message 1905835 - Posted: 9 Dec 2017, 5:16:45 UTC - in response to Message 1905823.  

I wonder how many cards ($10K or $3K) actually get sold between announcement now and next year.

The $10,000+ cards have been shipping since late 2016 (prices for those early GPUs were roughly $19,000 each). Flogging off the $3,000 cards just means they're not writing off as much silicon to wastage (it's been suggested these cards are ones that didn't pass final inspection for the V100 Tesla cards, but are suitable for what the Nvidia Titan V will be used for).

Part of the high retail cost for these cards is the very large proportion of costs allotted to the HBM2 memory which is still very expensive because of the very poor yields they are achieving. And the yields for the NV100 chip can't be great either because it is HUGE.

The HBM2 memory does add to the cost, but as you pointed out, the size of the die is the biggest reason for their incredible expense. Basically, they are a design that was ahead of, or at the very limit of, chip manufacturing capabilities when they were released.
I expect the consumer release Volta cards won't have any Tensor cores, so that will reduce the die size significantly, along with reductions in the number of CUDA cores for each model.
Grant
Darwin NT
ID: 1905835
Keith Myers Special Project $250 donor
Volunteer tester
Joined: 29 Apr 01
Posts: 4611
Credit: 295,600,315
RAC: 602,440
United States
Message 1905841 - Posted: 9 Dec 2017, 5:35:25 UTC - in response to Message 1905835.  


The HBM2 memory does add to the cost, but as you pointed out, the size of the die is the biggest reason for their incredible expense. Basically, they are a design that was ahead of, or at the very limit of, chip manufacturing capabilities when they were released.
I expect the consumer release Volta cards won't have any Tensor cores, so that will reduce the die size significantly, along with reductions in the number of CUDA cores for each model.

I expect that too with the release of NV102-chipped consumer cards. Or will they really remask the design to eliminate the Tensor core subsystem... or just fuse off the failed parts of NV100 chips with flaws in that subsystem? Building up new masks is expensive, but it could be the smart way to achieve that goal, as then they could use standard-size reticles. They are at the absolute limit of reticle size now with the current design.
Seti@Home classic workunits:20,676 CPU time:74,226 hours
ID: 1905841
hasherati
Joined: 9 Oct 17
Posts: 11
Credit: 3,123,660
RAC: 0
United States
Message 1905959 - Posted: 9 Dec 2017, 17:30:16 UTC - in response to Message 1905835.  


I expect the consumer release Volta cards won't have any Tensor cores, so that will reduce the die size significantly, along with reductions in the number of CUDA cores for each model.


Specs on the web site say there are 640 tensor cores in the Titan V.
https://www.nvidia.com/en-us/titan/titan-v/?nvid=nv-int-tnvptlh-29190
ID: 1905959
Keith Myers Special Project $250 donor
Volunteer tester
Joined: 29 Apr 01
Posts: 4611
Credit: 295,600,315
RAC: 602,440
United States
Message 1905964 - Posted: 9 Dec 2017, 18:08:14 UTC - in response to Message 1905959.  

We're not talking about the workstation cards. We're talking about the future cut-down NV102 consumer level cards that will be released by the AIB partners next year.
Seti@Home classic workunits:20,676 CPU time:74,226 hours
ID: 1905964
Al Special Project $250 donor
Joined: 3 Apr 99
Posts: 1620
Credit: 351,936,676
RAC: 314,512
United States
Message 1907303 - Posted: 15 Dec 2017, 22:40:40 UTC - in response to Message 1905704.  

Wanted to start a discussion here on the heels of Nvidia's release of the Titan V card based on the Volta architecture. I put my order in yesterday and it's going to be 3-5 days until I can report if things work as expected but wanted to ask here if anyone knows whether the client is fully optimized to work with Volta already?
hasherati, it's been a week now; have you had a chance to see what those bad boys are capable of yet? :-)

ID: 1907303
Keith Myers Special Project $250 donor
Volunteer tester
Joined: 29 Apr 01
Posts: 4611
Credit: 295,600,315
RAC: 602,440
United States
Message 1907306 - Posted: 15 Dec 2017, 22:46:05 UTC - in response to Message 1907303.  

I doubt he has received the card yet. And I also question whether any project, or BOINC itself, will understand the card yet, even if the latest drivers support it.
Seti@Home classic workunits:20,676 CPU time:74,226 hours
ID: 1907306
Al Special Project $250 donor
Joined: 3 Apr 99
Posts: 1620
Credit: 351,936,676
RAC: 314,512
United States
Message 1907308 - Posted: 15 Dec 2017, 22:57:57 UTC - in response to Message 1907306.  

Oh, the joys of living on the bleeding edge, eh?

ID: 1907308
Keith Myers Special Project $250 donor
Volunteer tester
Joined: 29 Apr 01
Posts: 4611
Credit: 295,600,315
RAC: 602,440
United States
Message 1907331 - Posted: 15 Dec 2017, 23:58:12 UTC - in response to Message 1907308.  

As a Ryzen user, yes, I sure agree with that sentiment.
Seti@Home classic workunits:20,676 CPU time:74,226 hours
ID: 1907331
Zalster Special Project $250 donor
Volunteer tester
Joined: 27 May 99
Posts: 4445
Credit: 260,503,839
RAC: 8,525
United States
Message 1907365 - Posted: 16 Dec 2017, 2:02:26 UTC - in response to Message 1907303.  

There's an article on it I read today. For single precision it's maybe marginally faster than a 1080 Ti; for gaming, why bother. For double precision, it ROCKS.

If you are a developer or coder that can utilize double precision compute, however, the Titan V looks like a must-have product. That's a tough thing to type out for anything with a price tag in that range, but we are talking about a GPU that offers 10-14x better performance in some key performance metrics including N-body simulation, financial analysis, and shader-based compute.

They tested it out on Folding@home and it was outstanding.

For cryptocurrency it's faster than anything out there, but at a price tag of $3,000 it doesn't make sense to use it.

The only thing I don't like is the memory speed, which is much lower than a 1080 Ti's.

Guess we need to see how it actually does, but you would think it would be outstanding for DP projects.
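The back-of-the-envelope numbers line up with that. Peak throughput is roughly cores x 2 FLOPs per clock (FMA) x clock, derated by the FP64:FP32 execution ratio; the clocks and ratios below are the published figures as I understand them, so treat this as a sketch rather than a benchmark:

```python
def peak_tflops(cuda_cores, boost_ghz, fp64_ratio):
    # 2 FLOPs per core per clock from fused multiply-add,
    # scaled by the architecture's FP64:FP32 execution ratio.
    return cuda_cores * 2 * boost_ghz * fp64_ratio / 1000.0

titan_v_fp64 = peak_tflops(5120, 1.455, 1 / 2)      # ~7.45 TFLOPS
gtx_1080ti_fp64 = peak_tflops(3584, 1.582, 1 / 32)  # ~0.35 TFLOPS

# On paper, roughly a 21x gap in double precision.
print(round(titan_v_fp64 / gtx_1080ti_fp64, 1))
```

Which is why a measured 10-14x advantage in DP-heavy workloads is entirely plausible once real-world overheads are factored in.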
ID: 1907365
Keith Myers Special Project $250 donor
Volunteer tester
Joined: 29 Apr 01
Posts: 4611
Credit: 295,600,315
RAC: 602,440
United States
Message 1907366 - Posted: 16 Dec 2017, 2:07:06 UTC

Where was today's article? Yes, the memory problem seems to come from the HBM2. Maybe next year, when the consumer cut-down version ships, it will use GDDR6 memory, which has much better bandwidth than even GDDR5X. It should work great on MilkyWay, which demands DP processing, and the N-body tasks at Einstein should benefit too.
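On the memory question it's worth separating clock speed from total bandwidth: HBM2 runs at a much lower clock than GDDR5X but over a far wider bus, so peak bandwidth = (bus width in bits / 8) x per-pin data rate. A quick sketch using the spec-sheet numbers (approximate, for illustration only):

```python
def peak_bandwidth_gbs(bus_width_bits, data_rate_gbps):
    # Bytes moved per second: bus width in bytes times per-pin data rate.
    return bus_width_bits / 8 * data_rate_gbps

titan_v_bw = peak_bandwidth_gbs(3072, 1.7)     # HBM2: slow clock, very wide bus
gtx_1080ti_bw = peak_bandwidth_gbs(352, 11.0)  # GDDR5X: fast clock, narrow bus

print(titan_v_bw, gtx_1080ti_bw)  # ~652.8 vs 484.0 GB/s
```

So despite the lower memory clock, total bandwidth is actually higher on the Titan V; GDDR6's advantage over GDDR5X comes from pushing the per-pin rate higher still.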
Seti@Home classic workunits:20,676 CPU time:74,226 hours
ID: 1907366
Zalster Special Project $250 donor
Volunteer tester
Joined: 27 May 99
Posts: 4445
Credit: 260,503,839
RAC: 8,525
United States
Message 1907392 - Posted: 16 Dec 2017, 3:26:07 UTC - in response to Message 1907366.  

Where was today's article? Yes, the memory problem is from HBM2 it seems. Maybe next year when the consumer cut-down version ships, it will use GDDR6 memory which has much better bandwidth than even GDDR5X. It should work great on MilkyWay which demands DP processing and the N-body tasks at Einstein should benefit too.


https://www.pcper.com/reviews/Graphics-Cards/NVIDIA-TITAN-V-Review-Part-2-Compute-Performance/Where-Volta-stands-today
ID: 1907392
Keith Myers Special Project $250 donor
Volunteer tester
Joined: 29 Apr 01
Posts: 4611
Credit: 295,600,315
RAC: 602,440
United States
Message 1907399 - Posted: 16 Dec 2017, 4:15:41 UTC - in response to Message 1907392.  

Thanks for the link, Z-man. I've always liked PCPer's reviews. Interesting read.
Seti@Home classic workunits:20,676 CPU time:74,226 hours
ID: 1907399
hasherati
Joined: 9 Oct 17
Posts: 11
Credit: 3,123,660
RAC: 0
United States
Message 1907403 - Posted: 16 Dec 2017, 4:31:37 UTC - in response to Message 1907303.  

Sorry guys, been tied up, and I must admit the first thing I did was test Ethereum mining :) I'm hitting 77 MH/s, which is great, but I agree it doesn't make a lot of sense to use a $3,000 card only to make $3 a day in coin. The other card is sitting in a box, unopened. I'm at a large Silicon Valley tech firm and we had Nvidia over for an AI/ML talk this week. Talking with their sales rep, they think the first batch of Titan Vs is going to sell out pretty quickly, so I'm considering putting the other one on eBay NIB once they sell out, to make some of the $ back. It's a gamble, but we'll see. Let me flip over to BOINC and see what this will do.
ID: 1907403


 
©2018 University of California
 
SETI@home and Astropulse are funded by grants from the National Science Foundation, NASA, and donations from SETI@home volunteers. AstroPulse is funded in part by the NSF through grant AST-0307956.