GTX 1660 thread

MarkJ · Crowdfunding Project Donor · Special Project $75 donor · Special Project $250 donor
Volunteer tester
Joined: 17 Feb 08
Posts: 1139
Credit: 80,854,192
RAC: 5
Australia
Message 1982446 - Posted: 27 Feb 2019, 11:01:02 UTC

Got an in-stock notification from EVGA, so went to order, only to find out they won't ship to Australia.

The XC and XC Black have shown up on some Australian websites, but not the XC Ultra that I wanted. Still trying to place an order, even if I have to wait a while for them to arrive. As they say in the adverts, “shut up and take my money”.
BOINC blog
ID: 1982446
Profile Bill Special Project $75 donor
Volunteer tester
Joined: 30 Nov 05
Posts: 282
Credit: 6,916,194
RAC: 60
United States
Message 1982461 - Posted: 27 Feb 2019, 14:34:04 UTC - in response to Message 1982269.  

Well it has been crunching OK overnight, so I thought I would do a quick "non-scientific" look at the figures.

The 1660 replaced a 1060, and the 970 is in my other machine.

I have done a quick average of the last 20 tasks done by my three cards (not including AP or shorties) and these are the times:

GTX 970 - 6.4 minutes/task

GTX 1060 - 7.3 minutes/task

GTX 1660 - 4.5 minutes/task

*snip*

The interesting thing is the power consumption: 92 watts versus 125 watts.
This is the "non-scientific" data I am interested in. It implies that the watts per task figure is significantly lower. I wonder how that compares to the RTX series?
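
To put rough numbers on that watts-per-task idea, here is a back-of-the-envelope sketch; it assumes the 92 W reading is the GTX 1660 and the 125 W one the GTX 1060 it replaced, which the post doesn't state explicitly:

    # Energy per task = board power x time per task.
    # Assumption: 92 W is the GTX 1660, 125 W the GTX 1060 it replaced.
    for card, watts, minutes in [("GTX 1060", 125, 7.3), ("GTX 1660", 92, 4.5)]:
        wh_per_task = watts * minutes / 60  # watt-hours per task
        print(f"{card}: {wh_per_task:.1f} Wh/task")
    # GTX 1060: 15.2 Wh/task; GTX 1660: 6.9 Wh/task -- roughly 2.2x less energy.

On these figures the 1660 completes a task for well under half the energy of the 1060.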
Seti@home classic: 1,456 results, 1.613 years CPU time
ID: 1982461
Profile PKII Project Donor
Joined: 28 May 07
Posts: 166
Credit: 2,729,646
RAC: 0
United States
Message 1983392 - Posted: 4 Mar 2019, 15:08:32 UTC - in response to Message 1982461.  

Well it has been crunching OK overnight, so I thought I would do a quick "non-scientific" look at the figures.

The 1660 replaced a 1060, and the 970 is in my other machine.

I have done a quick average of the last 20 tasks done by my three cards (not including AP or shorties) and these are the times:

GTX 970 - 6.4 minutes/task

GTX 1060 - 7.3 minutes/task

GTX 1660 - 4.5 minutes/task

*snip*

The interesting thing is the power consumption: 92 watts versus 125 watts.
This is the "non-scientific" data I am interested in. It implies that the watts per task figure is significantly lower. I wonder how that compares to the RTX series?

I was thinking of getting the 1660, but I can live with my rather long times of 14 or so minutes/task with my 1050.
ID: 1983392
juan BFP · Crowdfunding Project Donor · Special Project $75 donor · Special Project $250 donor
Volunteer tester
Joined: 16 Mar 07
Posts: 9786
Credit: 572,710,851
RAC: 3,799
Panama
Message 1983394 - Posted: 4 Mar 2019, 15:44:45 UTC - in response to Message 1982269.  
Last modified: 4 Mar 2019, 15:47:21 UTC


GTX 970 - 6.4 minutes/task

GTX 1060 - 7.3 minutes/task

GTX 1660 - 4.5 minutes/task


Interesting, did you try the 1660 in the Linux box too?

It will be interesting to see its crunching performance when it runs the highly optimized Petri builds, and to compare it with other high-end GPUs.

Maybe we have a new winner in cost x power x performance.
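
One way to make that cost x power x performance comparison concrete is a rough figure of merit such as tasks per day per watt per dollar. A minimal sketch using the task times and watts quoted above; the prices are purely hypothetical placeholders, not figures from this thread:

    # Rough figure of merit: tasks/day per watt per dollar (bigger is better).
    # Task times and watts are the figures quoted above; the prices are
    # hypothetical placeholders, not numbers from this thread.
    cards = {
        "GTX 1060": {"min_per_task": 7.3, "watts": 125, "price_usd": 250},
        "GTX 1660": {"min_per_task": 4.5, "watts": 92,  "price_usd": 220},
    }
    for name, c in cards.items():
        tasks_per_day = 24 * 60 / c["min_per_task"]
        merit = tasks_per_day / (c["watts"] * c["price_usd"])
        print(f"{name}: {tasks_per_day:.0f} tasks/day, merit {merit:.4f}")

On those placeholder prices the 1660 comes out roughly 2.5x ahead, so the "new winner" hunch looks plausible.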
ID: 1983394
Profile Keith Myers Special Project $250 donor
Volunteer tester
Joined: 29 Apr 01
Posts: 13164
Credit: 1,160,866,277
RAC: 1,873
United States
Message 1983397 - Posted: 4 Mar 2019, 16:25:17 UTC - in response to Message 1983392.  

This is the "non-scientific" data I am interested in. It implies that the watts per task figure is significantly lower. I wonder how that compares to the RTX series?

Should be as good as or better than the RTX series. It is the same architecture but with the RTX cores disabled, so they never use any power. Or possibly the cards don't have the RTX cores physically present at all, because they were never laid down in the silicon.
Seti@Home classic workunits:20,676 CPU time:74,226 hours

A proud member of the OFA (Old Farts Association)
ID: 1983397
Richard Haselgrove Project Donor
Volunteer tester

Joined: 4 Jul 99
Posts: 14690
Credit: 200,643,578
RAC: 874
United Kingdom
Message 1983400 - Posted: 4 Mar 2019, 16:39:56 UTC - in response to Message 1983397.  

Wikipedia, which seems to keep up with these things pretty well, says of the GeForce 16 series:

This series, while based on the Turing architecture, lacks the Tensor (artificial intelligence) and RT (ray tracing) cores unique to Turing, providing a lower cost alternative to the "RTX" GeForce 20 Series.
Another page says that the transistor count has gone down from 10.8 billion to 6.6 billion.
ID: 1983400
Profile Bernie Vine
Volunteer moderator
Volunteer tester
Joined: 26 May 99
Posts: 9958
Credit: 103,452,613
RAC: 328
United Kingdom
Message 1983426 - Posted: 4 Mar 2019, 20:05:23 UTC
Last modified: 4 Mar 2019, 20:16:28 UTC

I'm getting 11-17 mins a wu out of my Asus 970 Turbo w/ a 100 MHz overclock and the fan at 100%.


Interesting, as I am getting 6-8 minutes a wu from my non-turbo, non-overclocked Zotac 970, running one wu at a time. (An average of 40 random tasks from my list is 7 minutes.)

PS I wonder if there is a better command line for the 1660 Ti; I am still using the one for my 1060.

Interesting, did you try the 1660 in the Linux box too?


Sorry, but the main reason I bought the 1660 was for gaming, and there Windows rules. The Linux box was an experiment which I may expand on later with a purpose-built machine, firstly for the 1060, but who knows ;-)
ID: 1983426
Profile Keith Myers Special Project $250 donor
Volunteer tester
Joined: 29 Apr 01
Posts: 13164
Credit: 1,160,866,277
RAC: 1,873
United States
Message 1983430 - Posted: 4 Mar 2019, 20:15:46 UTC - in response to Message 1983400.  

Wikipedia, which seems to keep up with these things pretty well, says of the GeForce 16 series:

This series, while based on the Turing architecture, lacks the Tensor (artificial intelligence) and RT (ray tracing) cores unique to Turing, providing a lower cost alternative to the "RTX" GeForce 20 Series.
Another page says that the transistor count has gone down from 10.8 billion to 6.6 billion.

I would say that is a pretty good indication that they created completely new lithography masks to eliminate the Tensor and RT cores, instead of just lasering them off.
Seti@Home classic workunits:20,676 CPU time:74,226 hours

A proud member of the OFA (Old Farts Association)
ID: 1983430
Profile Brent Norman · Crowdfunding Project Donor · Special Project $75 donor · Special Project $250 donor
Volunteer tester

Joined: 1 Dec 99
Posts: 2786
Credit: 685,657,289
RAC: 835
Canada
Message 1983445 - Posted: 4 Mar 2019, 21:29:37 UTC - in response to Message 1983405.  

What I'm interested in is the Asus 1660Ti Strix, a Turbo would be nice too...
Vic, I wouldn't waste your time looking at cards. By the time you can afford one they will be worthless paperweights.
ID: 1983445
Profile Keith Myers Special Project $250 donor
Volunteer tester
Joined: 29 Apr 01
Posts: 13164
Credit: 1,160,866,277
RAC: 1,873
United States
Message 1983447 - Posted: 4 Mar 2019, 21:31:57 UTC - in response to Message 1983438.  

I'm not sure why a turbo card is your preference. For just running distributed computing, the stock cards run much faster than their published stock speeds simply because GPU Boost 3.0 allows them to. For instance, my stock RTX 2080 has a published boost speed of 1800 MHz, yet it runs constantly at 2035 MHz while crunching without me doing anything. No overclocking needed.

The only reason I can think of for purchasing a "turbo" card is if you intend to use it for gaming as well, where the turbo or boost clocks are rarely achieved or maintained because of the difference running the video cores makes in the card temps.
Seti@Home classic workunits:20,676 CPU time:74,226 hours

A proud member of the OFA (Old Farts Association)
ID: 1983447
Profile Wiggo
Joined: 24 Jan 00
Posts: 37769
Credit: 261,360,520
RAC: 489
Australia
Message 1983448 - Posted: 4 Mar 2019, 21:32:41 UTC - in response to Message 1983445.  

What I'm interested in is the Asus 1660Ti Strix, a Turbo would be nice too...
Vic, I wouldn't waste your time looking at cards. By the time you can afford one they will be worthless paperweights.
Just like those GTX580's and water blocks he bought.

Cheers.
ID: 1983448
Ian&Steve C.
Joined: 28 Sep 99
Posts: 4267
Credit: 1,282,604,591
RAC: 6,640
United States
Message 1983457 - Posted: 4 Mar 2019, 22:05:28 UTC - in response to Message 1983447.  
Last modified: 4 Mar 2019, 22:05:44 UTC

I'm not sure why a turbo card is your preference. For just running distributed computing, the stock cards run much faster than their published stock speeds simply because GPU Boost 3.0 allows them to. For instance, my stock RTX 2080 has a published boost speed of 1800 MHz, yet it runs constantly at 2035 MHz while crunching without me doing anything. No overclocking needed.

The only reason I can think of for purchasing a "turbo" card is if you intend to use it for gaming as well, where the turbo or boost clocks are rarely achieved or maintained because of the difference running the video cores makes in the card temps.


I think he's referring to the ASUS Turbo models, which are blower cards.

Maybe he thinks the blower model has some advantage, but it's one of their low-end cards with a less-than-ideal cooling solution, in my opinion.

Here's an example of their "Turbo" model, in the form of a 1080: ASUS 1080 Turbo
Seti@Home classic workunits: 29,492 CPU time: 134,419 hours

ID: 1983457
Profile Keith Myers Special Project $250 donor
Volunteer tester
Joined: 29 Apr 01
Posts: 13164
Credit: 1,160,866,277
RAC: 1,873
United States
Message 1983463 - Posted: 4 Mar 2019, 22:20:28 UTC - in response to Message 1983457.  

Well, up until I got the three 1070 Ti cards, all my cards were reference blower models. I liked that the heat was forced out of the back of the case and didn't add any heat to the case interior, which impacts the efficiency of the CPU cooler. The computer with the 1070 Ti cards runs the CPU at a much higher temp than the other computers with blower cards. You need very good case ventilation to use the non-reference fan cards; the reference cards can be used more easily in marginal cases with poor ventilation.
Seti@Home classic workunits:20,676 CPU time:74,226 hours

A proud member of the OFA (Old Farts Association)
ID: 1983463
Ian&Steve C.
Joined: 28 Sep 99
Posts: 4267
Credit: 1,282,604,591
RAC: 6,640
United States
Message 1983469 - Posted: 4 Mar 2019, 22:42:26 UTC - in response to Message 1983463.  
Last modified: 4 Mar 2019, 22:43:49 UTC

Blowers aren't always "reference" (I reserve that word for the manufacturer's reference PCB), but most of the time they are. And in the case of the 1660 Ti there won't be a "reference" card, because there is no reference; Nvidia isn't making one.

A blower isn't as good at removing heat as a multi-fan card. They ALWAYS run hotter and have lower clocks than a multi-fan card. It takes a ton of fan speed and noise to get temps reasonable, and even then it will lag behind a multi-fan model, and the rear I/O plate of those blower cards is always choked and doesn't have nearly enough opening. For the 99% of consumer use cases that have enough extra space to move the heated air away from the card to where case fans can exhaust it, a multi-fan card will be better.

The only time a blower makes sense to me is in a 3U server with card-height restrictions and high-pressure/CFM fans that can force air through it, in a datacenter with no noise concerns.
Seti@Home classic workunits: 29,492 CPU time: 134,419 hours

ID: 1983469
Grant (SSSF)
Volunteer tester

Joined: 19 Aug 99
Posts: 13915
Credit: 208,696,464
RAC: 304
Australia
Message 1983531 - Posted: 5 Mar 2019, 7:59:03 UTC - in response to Message 1983426.  

PS I wonder if there is a better command line for the 1660 Ti; I am still using the one for my 1060.

I would try lower values for period_iterations_num.
I'd suggest seeing how 15 or even 10 goes. The lower the number, the better the performance; however, the more likely (and the greater) the degradation in system/screen/keyboard responsiveness.
Try a value for 30 min or so; if it's good, try a lower value for a while. If you notice the system becoming a bit less responsive, bump the value back up by 2 or so and see how that goes. The more powerful the card, the lower the value can be before any impact on system responsiveness is felt.
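
For reference, with the OpenCL MultiBeam builds this switch normally lives in the app's mb_cmdline*.txt file in the project directory; the exact filename depends on the build, so the one below is illustrative only. A minimal sketch of the starting point suggested above:

    mb_cmdline-8.22-opencl_nvidia_SoG.txt (hypothetical filename):
        -period_iterations_num 15

If the desktop stays responsive after half an hour or so, try 10; if it turns sluggish, bump the value back up by 2 or so, as described above.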
Grant
Darwin NT
ID: 1983531
Richard Haselgrove Project Donor
Volunteer tester

Joined: 4 Jul 99
Posts: 14690
Credit: 200,643,578
RAC: 874
United Kingdom
Message 1983634 - Posted: 6 Mar 2019, 13:34:57 UTC

Could somebody with a GTX 1660 Ti please check the start-up values reported by BOINC (latest version 7.14.2 only - we know the older ones will be wrong) and compare with these values from SIV? GFlops Peak is the key value.

ID: 1983634
Profile -= Vyper =-
Volunteer tester
Joined: 5 Sep 99
Posts: 1652
Credit: 1,065,191,981
RAC: 2,537
Sweden
Message 1983651 - Posted: 6 Mar 2019, 14:32:57 UTC - in response to Message 1983634.  

Could somebody with a GTX 1660 Ti please check the start-up values reported by BOINC (latest version 7.14.2 only - we know the older ones will be wrong) and compare with these values from SIV? GFlops Peak is the key value.



GeForce GTX 1660 Ti (driver version 418.43, device version OpenCL 1.2 CUDA, 5915MB, 3972MB available, 5484 GFLOPS peak)

_________________________________________________________________________
Addicted to SETI crunching!
Founder of GPU Users Group
ID: 1983651
Richard Haselgrove Project Donor
Volunteer tester

Joined: 4 Jul 99
Posts: 14690
Credit: 200,643,578
RAC: 874
United Kingdom
Message 1983662 - Posted: 6 Mar 2019, 15:10:06 UTC - in response to Message 1983651.  

Thanks - that will be within the variation limits of different cards with different levels of overclocking. So we don't need to re-code the BOINC detection routine again.

All I needed to know.
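
For context on where that 5484 figure comes from: BOINC's detection estimates an NVIDIA card's peak as roughly shader cores x clock x 2 FLOPs per clock (one fused multiply-add). A sketch of the arithmetic, assuming the 1660 Ti's 1536 CUDA cores (24 SMs x 64 on compute capability 7.5) and a boost clock of about 1785 MHz:

    # Peak-GFLOPS estimate, roughly what BOINC's GPU detection computes:
    #   shader cores x clock x 2 FLOPs/clock (fused multiply-add).
    # Assumptions: 1536 CUDA cores (24 SMs x 64 on CC 7.5) and a ~1785 MHz
    # boost clock, which reproduces the 5484 GFLOPS reported above.
    cores = 24 * 64
    clock_mhz = 1785
    gflops_peak = cores * clock_mhz * 2 / 1000
    print(gflops_peak)  # ~5483.5, reported as 5484 GFLOPS peak

Factory overclocks of a few tens of MHz shift this by only a percent or two, which is the variation between cards mentioned above.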
ID: 1983662
Profile Bernie Vine
Volunteer moderator
Volunteer tester
Joined: 26 May 99
Posts: 9958
Credit: 103,452,613
RAC: 328
United Kingdom
Message 1983673 - Posted: 6 Mar 2019, 21:10:17 UTC
Last modified: 6 Mar 2019, 21:10:51 UTC

Just for comparison

Here's mine


CUDA: NVIDIA GPU 0: GeForce GTX 1660 Ti (driver version 419.17, CUDA version 10.1, compute capability 7.5, 4096MB, 3556MB available, 5484 GFLOPS peak)

OpenCL: NVIDIA GPU 0: GeForce GTX 1660 Ti (driver version 419.17, device version OpenCL 1.2 CUDA, 6144MB, 3556MB available, 5484 GFLOPS peak)
ID: 1983673
Profile Keith Myers Special Project $250 donor
Volunteer tester
Joined: 29 Apr 01
Posts: 13164
Credit: 1,160,866,277
RAC: 1,873
United States
Message 1985204 - Posted: 15 Mar 2019, 1:03:37 UTC

The Nvidia GTX 1660 launches today at an MSRP of $220, coming in $60 cheaper than the Ti variant. Benchmarks show it to be 15% faster than the GTX 1060 6GB. This should force any remaining GTX 1060 cards in inventory down to bargain-basement prices. Might be some very nice deals showing up soon on older inventory.
Seti@Home classic workunits:20,676 CPU time:74,226 hours

A proud member of the OFA (Old Farts Association)
ID: 1985204