I've almost been talked into purchasing a Threadripper 2950X CPU... Pro's & Con's?

George (Project Donor, Volunteer tester) · Joined: 23 Oct 17 · Posts: 222 · Credit: 2,597,521 · RAC: 13 · United States
Message 2006814 - Posted: 11 Aug 2019, 17:11:42 UTC

Well, as I said, I've almost been talked into a Threadripper 2950X CPU: 16 cores, 32 processing threads, an astonishing 40 MB of combined L2 & L3 cache, and an unprecedented 64 PCIe Gen3 lanes to cover large GPU and NVMe needs. I don't intend to overclock it, but it has a TDP of 180W with a max temperature of 68°C.

Amazon has a special(?) bundling DDR4-2933 RAM with the CPU. I'd like to get DDR4-3200 RAM instead. Concerns?

I've been reading up on AMD's CPUs and I've noticed some restrictions, but this CPU seems to have fewer restrictions than others. If I run Linux Mint (at least until I've become accustomed to Linux) and run the special edition(?) of it, will it handle all 32 threads and run 2x RTX 2080 Ti GPUs with an NVMe Samsung 970 EVO Plus 1TB SSD for booting? I'll also have a Samsung 860 EVO 2TB SSD and a WD Black 2TB HDD for backup. I'll put all of this in an ASUS Prime X299-Deluxe II MB.

Thoughts? Concerns?
George

ID: 2006814

Ian&Steve C. · Joined: 28 Sep 99 · Posts: 4267 · Credit: 1,282,604,591 · RAC: 6,640 · United States
Message 2006815 - Posted: 11 Aug 2019, 17:17:49 UTC - in response to Message 2006814.

Unless you NEED all the PCIe lanes and memory bandwidth (you probably don't), just wait for the 3950X. It will be faster.
Seti@Home classic workunits: 29,492 CPU time: 134,419 hours

ID: 2006815

Keith Myers (Special Project $250 donor, Volunteer tester) · Joined: 29 Apr 01 · Posts: 13164 · Credit: 1,160,866,277 · RAC: 1,873 · United States
Message 2006818 - Posted: 11 Aug 2019, 17:31:52 UTC - in response to Message 2006815.
Last modified: 11 Aug 2019, 17:41:10 UTC

Unless you NEED all the PCIe lanes and memory bandwidth (you probably don't), just wait for the 3950X. It will be faster.

I concur. The 3950X will be cheaper in the end, with better CPU and memory performance, and the AM4 motherboards are cheaper too.

Either way, I would go for faster memory: at least 3200CL14, 3200CL16, 3600CL15 or 3600CL16. The Threadripper 2950X can get the memory up to at least 3400CL14 or 3466CL14. The 3950X will be able to get the memory up to at least 3600CL14 or even 3800CL16.
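
For anyone weighing those kits against each other, a rough yardstick is first-word latency: latency in ns = 2000 × CL / transfer rate in MT/s. A minimal sketch (my own illustration, not anything SETI-specific) using the speeds and CAS latencies quoted above:

# Rough first-word latency for the DDR4 kits mentioned above.
# latency_ns = 2000 * CL / transfer_rate_MTs (a standard approximation;
# real performance also depends on secondary timings, which this ignores).
kits = [("DDR4-3200 CL14", 3200, 14),
        ("DDR4-3200 CL16", 3200, 16),
        ("DDR4-3600 CL15", 3600, 15),
        ("DDR4-3600 CL16", 3600, 16),
        ("DDR4-3600 CL14", 3600, 14),
        ("DDR4-3800 CL16", 3800, 16)]

for name, rate, cl in kits:
    print(f"{name}: {2000 * cl / rate:.2f} ns")

By that measure 3600CL14 is the quickest of the bunch (~7.8 ns) and 3200CL16 the slowest (~10 ns), which is why the CL14 kits keep coming up.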

[Edit] And if you are impressed by the 40MB of combined L2 & L3 cache in the TR 2950X . . . consider that the Ryzen 3950X has 72MB of combined L2 & L3 cache.

[Edit 2] Also, you have listed an Intel motherboard in your post for an AMD processor. That won't work.

If you are considering TR4 because of the number of PCIe lanes available, why hamstring yourself with the ASUS X399-A Prime board, which can only fit three double-slot cards?

I would recommend the Asrock X399 Professional Gaming which has four double wide PCIe slots and fits four cards. That is the board I use. Works great.
https://www.amazon.com/ASRock-X399-Professional-Gaming-Motherboard/dp/B074J6LVBD
Seti@Home classic workunits:20,676 CPU time:74,226 hours

A proud member of the OFA (Old Farts Association)
ID: 2006818

George (Project Donor, Volunteer tester) · Joined: 23 Oct 17 · Posts: 222 · Credit: 2,597,521 · RAC: 13 · United States
Message 2006841 - Posted: 11 Aug 2019, 20:50:54 UTC - in response to Message 2006818.
Last modified: 11 Aug 2019, 20:53:11 UTC

Thanks, Keith and Ian&Steve, for clarifying something for me. Maybe I will wait for the 3950X. But I'm a bit perplexed. I read in https://pcper.com/2019/06/amd-16-core-ryzen-9-3950x-processor// that AM4 still offers only 20 lanes from the CPU. Is this true? Is that a limitation of all AMD motherboards?

And yes, I made a mistake in posting an Intel X299 MB. I meant to post the ASUS PRIME X399-A AMD Threadripper TR4 MB (which happens to be ~$90 cheaper than the ASRock X399 TR4 board).

I can only afford one RTX 2080 Ti GPU for now, and I will buy another when my budget allows. So I don't see a reason to go for more double-wide PCIe slots. When I do get a second card I will also get Nvidia's NVLink bridge, which, according to Nvidia, "connects two NVLink SLI-ready graphics cards with 50X the transfer bandwidth of previous technologies."

Assuming I do get a 3950X CPU, I like the idea of having faster memory. The G.Skill DDR4 3600 MHz CL14 appeals to me, but Amazon shows no listing and Newegg is sold out. And of course, they indicate the memory is for X570 MBs. https://www.techpowerup.com/257193/g-skill-announces-trident-z-neo-ddr4-memory-series-for-amd-ryzen-3000 The ASUS QVL for the Prime X470-Pro MB doesn't even list a Corsair DDR4 3600 MHz CL14 kit.

I'll wait until the 3950X comes out and goes through its pains of "sold out" & "prices higher than list". I don't need to be the first one on the block to have one. Maybe by then the rest of the market will settle down and be stable as far as prices and supplies go.
George

ID: 2006841

Wiggo · Joined: 24 Jan 00 · Posts: 38177 · Credit: 261,360,520 · RAC: 489 · Australia
Message 2006842 - Posted: 11 Aug 2019, 20:59:56 UTC

The NVLink bridge is only needed by gamers using SLI and is totally useless for crunchers. ;-)

Cheers.
ID: 2006842

Zalster (Special Project $250 donor, Volunteer tester) · Joined: 27 May 99 · Posts: 5517 · Credit: 528,817,460 · RAC: 242 · United States
Message 2006843 - Posted: 11 Aug 2019, 21:01:31 UTC - in response to Message 2006841.

that AM4 still offers only 20 lanes from the CPU. Is this true? Is that a limitation of all AMD motherboards?


There's been a move away from providing more PCIe lanes from the CPU, for some strange reason; more so for Intel than AMD. Most Intel CPUs only offer 16 PCIe lanes unless you go with the top of the line, in which case you get 44 PCIe lanes. AMD is actually going the other way and giving you more.

I can only afford one RTX 2080 Ti GPU for now, and I will buy another when my budget allows.
So why the 2080 Ti? Why not two 2080 Supers? They would cost about the same (maybe less) and are only a few seconds slower per task than a 2080 Ti, especially if you overclock the GPU memory and pair the board with some fast RAM.
ID: 2006843

Keith Myers (Special Project $250 donor, Volunteer tester) · Joined: 29 Apr 01 · Posts: 13164 · Credit: 1,160,866,277 · RAC: 1,873 · United States
Message 2006847 - Posted: 11 Aug 2019, 21:25:57 UTC

Nvidia SLI or AMD Crossfire is USELESS for crunching. Don't bother unless you need it for gaming. Whether you get an AM4 board or a TR4 board, the fastest you will be able to run any card is X8 with more than one card installed. But the PCIe slot bandwidth limitation is meaningless for Seti crunching, as proved by all the people running Seti on mining boards with only X1 slot bandwidth.

With Samsung stopping production of B-dies, it is getting very difficult to source any CL14 memory. The best you can get easily is CL15 memory that is Hynix E-die. It's sort of a crapshoot whether it will clock faster, though that is easier on Ryzen 3000. Don't worry about whether memory is on a QVL list. That is simply a list of whatever memory a vendor had on hand to test at STOCK settings. They certainly don't test at overclocked settings, and no vendor went out and bought every available memory kit to test. Focus on memory that overclocks well, whether or not it happens to be on a QVL somewhere.
G.Skill 3200CL14 kits are still available in the Trident-Z and Flare-X product lines. Those are made with Samsung B-dies. You can always tell B-dies from their primary timings, which will always be identical.
As in: 14-14-14-14-34 or 15-15-15-15-35 or 16-16-16-16-36. Those are the desirable kits, as they will easily overclock well past stock XMP settings.
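
If you want a quick sanity check while scanning kit listings, here is a tiny sketch (my own helper, not a G.Skill or vendor tool) that flags timing strings matching that "first four numbers identical" pattern:

# Flag kits whose first four primary timings are identical
# (the B-die pattern described above, e.g. 14-14-14-14-34).
def has_flat_primary_timings(timings: str) -> bool:
    parts = [int(x) for x in timings.split("-")]
    return len(parts) >= 4 and len(set(parts[:4])) == 1

for t in ("14-14-14-14-34", "16-18-18-18-38", "15-15-15-15-35"):
    print(t, has_flat_primary_timings(t))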

The AM4 socket provides 20 lanes from the CPU and another 4 lanes from the PCH. The bottom slot on the motherboard will be serviced by the 4 lanes from the PCH; it makes no difference where the lanes come from. On a typical AM4 X470 or X570 motherboard with 3 X16 slots, the top and middle slots will run at X8 and the bottom slot will run at X4. Only the new ASUS Pro WS X570-Ace can run X8/X8/X8, because it takes an additional 4 lanes from the PCH to add to the bottom slot. Only the MSI X570 Godlike has four PCIe X16 slots, and it will run at X8/X4/X4/X4.

A TR4 board with 4 slots occupied will run at X16/X8/X16/X8.
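
To put rough numbers on those slot widths: PCIe 3.0 is good for roughly 0.985 GB/s per lane (8 GT/s with 128b/130b encoding), so even an X4 or X1 link leaves plenty of headroom for Seti work units. A small sketch using the slot layouts listed above (the per-lane figure is the usual PCIe 3.0 number, not a measurement):

# Approximate per-slot bandwidth for the slot layouts described above.
# PCIe 3.0 carries ~0.985 GB/s per lane (8 GT/s, 128b/130b encoding).
GBPS_PER_LANE = 0.985

layouts = {
    "Typical AM4 X470/X570 (3 slots)": [8, 8, 4],
    "ASUS Pro WS X570-Ace":            [8, 8, 8],
    "MSI X570 Godlike (4 slots)":      [8, 4, 4, 4],
    "TR4 board, 4 slots occupied":     [16, 8, 16, 8],
}

for board, widths in layouts.items():
    slots = ", ".join(f"x{w} ~ {w * GBPS_PER_LANE:.1f} GB/s" for w in widths)
    print(f"{board}: {slots}")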
Seti@Home classic workunits:20,676 CPU time:74,226 hours

A proud member of the OFA (Old Farts Association)
ID: 2006847

Keith Myers (Special Project $250 donor, Volunteer tester) · Joined: 29 Apr 01 · Posts: 13164 · Credit: 1,160,866,277 · RAC: 1,873 · United States
Message 2006849 - Posted: 11 Aug 2019, 21:30:00 UTC

I agree on the 2080 Ti. You pay a large difference in price for very little performance gained. You can buy two 2070 Supers for the price of one 2080 Ti. Or two 2080 Supers for an additional 40% over the price of a single 2080 Ti.
Seti@Home classic workunits:20,676 CPU time:74,226 hours

A proud member of the OFA (Old Farts Association)
ID: 2006849

George (Project Donor, Volunteer tester) · Joined: 23 Oct 17 · Posts: 222 · Credit: 2,597,521 · RAC: 13 · United States
Message 2006852 - Posted: 11 Aug 2019, 22:05:13 UTC - in response to Message 2006842.

Wiggo said:
The NVLink bridge is only needed by gamers using SLI and is totally useless for crunchers. ;-)

So... the NVLink bridge is useless in crunching, but SLI will work for crunching?
George

ID: 2006852

George (Project Donor, Volunteer tester) · Joined: 23 Oct 17 · Posts: 222 · Credit: 2,597,521 · RAC: 13 · United States
Message 2006853 - Posted: 11 Aug 2019, 22:09:36 UTC - in response to Message 2006843.
Last modified: 11 Aug 2019, 22:11:37 UTC

Zalster said:
So why the 2080 Ti? Why not two 2080 Supers? They would cost about the same (maybe less) and are only a few seconds slower per task than a 2080 Ti, especially if you overclock the GPU memory and pair the board with some fast RAM.

I don't have any intention of gaming per se. And if NVLink and SLI are useless for crunching, why the suggestion to get 2x 2070 or 2x 2080? I realize the cost savings.
George

ID: 2006853

Keith Myers (Special Project $250 donor, Volunteer tester) · Joined: 29 Apr 01 · Posts: 13164 · Credit: 1,160,866,277 · RAC: 1,873 · United States
Message 2006855 - Posted: 11 Aug 2019, 22:14:05 UTC - in response to Message 2006852.
Last modified: 11 Aug 2019, 22:16:08 UTC

An SLI or Crossfire bridge DOES NOT bridge cards together for crunching like it does for gaming. You can install the bridge, but each card works by itself for crunching.

Because two GPUs are TWICE as productive as a single GPU. GPUs are more productive than CPUs by a wide margin. You can achieve a higher RAC by adding more GPUs to any host. RAC scales with GPU count.
Seti@Home classic workunits:20,676 CPU time:74,226 hours

A proud member of the OFA (Old Farts Association)
ID: 2006855

George (Project Donor, Volunteer tester) · Joined: 23 Oct 17 · Posts: 222 · Credit: 2,597,521 · RAC: 13 · United States
Message 2006859 - Posted: 11 Aug 2019, 22:19:10 UTC

Gentlemen, I realize that I am a newbie when it comes to SETI crunching. I hope that my questions are not bothering anybody. I just want to buy the best that I can specifically for SETI crunching, and this will be the last time I buy a computer. I will also use the computer for some reasonable work such as documents, photos, video rendering and the like, but mainly it will be for crunching.
George

ID: 2006859

George (Project Donor, Volunteer tester) · Joined: 23 Oct 17 · Posts: 222 · Credit: 2,597,521 · RAC: 13 · United States
Message 2006860 - Posted: 11 Aug 2019, 22:22:30 UTC - in response to Message 2006855.
Last modified: 11 Aug 2019, 22:25:29 UTC

So maybe I didn't realize something. I can put two GPUs in without doing SLI or NVLink and it will still work? I thought that if you put in more than one GPU you had to do SLI or NVLink just to make it work. This makes a BIG difference in my thinking.
George

ID: 2006860

Wiggo · Joined: 24 Jan 00 · Posts: 38177 · Credit: 261,360,520 · RAC: 489 · Australia
Message 2006863 - Posted: 11 Aug 2019, 22:35:07 UTC
Last modified: 11 Aug 2019, 22:35:34 UTC

As I said before, the link is only good for gamers because it lets them run their games at higher resolutions, quality and frame rates than with a single card (in 95% of cases anyway), but crunchers don't care about that.

No worries about the questions here either. ;-)

Cheers.
ID: 2006863

Ian&Steve C. · Joined: 28 Sep 99 · Posts: 4267 · Credit: 1,282,604,591 · RAC: 6,640 · United States
Message 2006864 - Posted: 11 Aug 2019, 22:38:47 UTC - in response to Message 2006860.

Nope. You don’t have to use the SLI bridges. The cards are used individually and run their own work units.

SLI is even becoming pointless in gaming. I like the idea of SLI, but very few games actually use it properly. None of the games I play ever supported it. It really only scales well in benchmarking tests.
Seti@Home classic workunits: 29,492 CPU time: 134,419 hours

ID: 2006864

George (Project Donor, Volunteer tester) · Joined: 23 Oct 17 · Posts: 222 · Credit: 2,597,521 · RAC: 13 · United States
Message 2006868 - Posted: 11 Aug 2019, 23:00:03 UTC - in response to Message 2006864.
Last modified: 11 Aug 2019, 23:04:23 UTC

If I ever decide to have more than two GPUs, is it necessary to have an x16 slot even though it may have only an x8 or x4 connection? What about the x1 slots that Keith referenced? Can they also be used for GPUs with a riser card and cable?

[EDIT] DOH! I just answered my own question regarding the X1 slots. Sorry Keith!
George

ID: 2006868

Keith Myers (Special Project $250 donor, Volunteer tester) · Joined: 29 Apr 01 · Posts: 13164 · Credit: 1,160,866,277 · RAC: 1,873 · United States
Message 2006870 - Posted: 11 Aug 2019, 23:19:16 UTC

Almost ALL GPUs have a PHYSICAL X16 card finger, which requires installing into a PHYSICAL X16 slot, or an X1, X4 or X8 slot with the end broken open so the card can extend beyond the physical slot length. An X16 slot MAY be wired for all possible X16 electrical connections, or it MAY only have X4 or X8 electrical connections.

Look at images of the underneath side of motherboard PCBs. If there are solder pads extending the full length of an X16 slot, then it has all of the X16 electrical connections. But if the solder pads only traverse half the length of the X16 slot, it only has electrical connections for X8, and any card plugged into it can only run at X8 speeds.
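
Once a card is actually installed you don't have to guess from the solder pads: on Linux the NVIDIA driver reports the link width that was negotiated. A minimal sketch, assuming nvidia-smi is on the PATH (the query fields are standard nvidia-smi options):

# Report current vs. maximum PCIe link width for each NVIDIA GPU.
# Assumes a Linux host with the NVIDIA driver installed (nvidia-smi on PATH).
import subprocess

out = subprocess.run(
    ["nvidia-smi",
     "--query-gpu=index,name,pcie.link.width.current,pcie.link.width.max",
     "--format=csv,noheader"],
    capture_output=True, text=True, check=True)

for line in out.stdout.strip().splitlines():
    idx, name, cur, max_w = [f.strip() for f in line.split(",")]
    print(f"GPU {idx} ({name}): running x{cur} of a possible x{max_w}")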
Seti@Home classic workunits:20,676 CPU time:74,226 hours

A proud member of the OFA (Old Farts Association)
ID: 2006870

George (Project Donor, Volunteer tester) · Joined: 23 Oct 17 · Posts: 222 · Credit: 2,597,521 · RAC: 13 · United States
Message 2006881 - Posted: 12 Aug 2019, 1:03:54 UTC - in response to Message 2006870.

I'm looking at three EVGA GPUs:

EVGA GeForce RTX 2080 Ti BLACK EDITION GAMING, 11G-P4-2281-KR for $1100 (no rebate) (Limit 2/household)

EVGA GeForce RTX 2080 SUPER BLACK GAMING, 08G-P4-3081-KR for $700 ($20 rebate included) (Limit 2/household)

EVGA GeForce RTX 2070 SUPER BLACK GAMING, 08G-P4-3071-KR for $500 ($20 rebate included) (Limit 2/household)

What does SETI use? Or, more appropriately, does SETI utilize CUDA cores in Linux? Yes. Does SETI utilize memory in CUDA calculations, i.e. total memory x memory clock speed? I think so. (I'm not too concerned with the boost clock speed.) Does SETI utilize memory bus width and memory bandwidth? This I don't know.

Could I do this as an example?

2080 Ti
CUDA/4352 x GDDR6/11,264 x Mem Clk Spd/14,000 = 686,292,992,000 for $1,100

2080 Super
CUDA/3072 x GDDR6/8,192 x Mem Clk Spd/15,500 = 390,070,272,000‬ for $700 (2x for $1,400 = 780,140,544,000‬)

2070 Super
CUDA/2560 x GDDR6/8,192 x Mem Clk Spd/14,000 = 293,601,280,000‬ for $500 (2x for $1,000 = 587,202,560,000‬)
________________

2x 2080 Super
For $300 more than a 2080 Ti (a 27.3% increase) I get an increase (in what, I don't know) of 93,847,552,000 (a 13.7% increase).

2x 2070 Super
For $100 less than a 2080 Ti (a 9.1% savings) I give up 99,090,432,000 (in what, I don't know), a 14.4% reduction.
________________

Help me out. Is this a valid way to compare GPUs? I'm at a crossroads.
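
For what it's worth, here is the same arithmetic gathered in one place, with price per "point" added (the score is just the ad-hoc product used above, CUDA cores x memory x memory clock; it is not an established benchmark and says nothing about actual Seti throughput):

# Reproduce the ad-hoc comparison above: score = CUDA cores * memory (MB) * mem clock (MHz),
# then divide by price. NOT a real benchmark, just the spec numbers quoted in this post.
cards = {
    "RTX 2080 Ti":    (4352, 11264, 14000, 1100),
    "RTX 2080 Super": (3072,  8192, 15500,  700),
    "RTX 2070 Super": (2560,  8192, 14000,  500),
}

for name, (cores, mem_mb, mem_clk, price) in cards.items():
    score = cores * mem_mb * mem_clk
    print(f"{name}: score {score:,} for ${price} ({score // price:,} per dollar)")
    print(f"  2x {name}: score {2 * score:,} for ${2 * price}")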
George

ID: 2006881

Ian&Steve C. · Joined: 28 Sep 99 · Posts: 4267 · Credit: 1,282,604,591 · RAC: 6,640 · United States
Message 2006886 - Posted: 12 Aug 2019, 1:29:19 UTC - in response to Message 2006881.

2x 2070 Super will be the best bang for the buck, and will be more productive than a single 2080 Ti.

2x 2080 Super will be faster than 2x 2070 Super, but they are 40% more expensive, and I don't think they will be 40% faster (closer to 10% faster).
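
A quick back-of-the-envelope version of that, taking the figures in this thread at face value (EVGA prices of $500 and $700 from the post above, the ~10% per-card difference stated here, and each card crunching its own work units so throughput scales linearly):

# Rough price/performance using the assumptions stated above:
# one 2070 Super = 1.0 unit of throughput, one 2080 Super ~ 1.1 units.
configs = {
    "2x RTX 2070 Super": (2 * 1.00, 2 * 500),
    "2x RTX 2080 Super": (2 * 1.10, 2 * 700),
}

for name, (throughput, price) in configs.items():
    print(f"{name}: ~{throughput:.1f} units for ${price} "
          f"(${price / throughput:,.0f} per unit)")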
Seti@Home classic workunits: 29,492 CPU time: 134,419 hours

ID: 2006886

George (Project Donor, Volunteer tester) · Joined: 23 Oct 17 · Posts: 222 · Credit: 2,597,521 · RAC: 13 · United States
Message 2006887 - Posted: 12 Aug 2019, 1:39:21 UTC - in response to Message 2006886.

With my way of calculating, the 2070 Super comes in 24.7% below the 2080 Super.

Food for thought.
George

ID: 2006887