Can someone help me write a query to pull data from a huge table?
I have a huge table with 1 billion rows. I want to pull 1 million rows at a time, repeatedly, until I've covered all 1 billion, and I don't want to repeat any data.
I can write sel top 1000000 from table, but I can't reuse the same query to pull the next 1 million rows (I don't want duplicate rows).
ex: sel top 1000000 from table
sel top 2000000 from table
sel top 1000000000 from table - the last of the 1 billion rows (like a for loop)
The table has 5 columns, one of which is a transaction count (tran cnt).
I am working on a POC, so I need SQL code for testing purposes.
Any ideas are welcome. Thanks in advance.
There's no efficient way to do this unless you can pull data based on partitions or an (almost) gapless sequence, but then you will not get exactly 1 million rows per chunk.
Why do you need it?
If you can make a copy of the table with a ROW_NUMBER column added, then in theory you could extract the rows whose row number is between X and Y. Or, far less efficiently, if there is some combination of columns that is unique, so that ORDER BY is guaranteed to always return the same sequence, you could recompute ROW_NUMBER for every query. But again, what is the real requirement? Perhaps there is a better way than extracting exactly 1 million rows at a time.
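The ROW_NUMBER-between-X-and-Y idea could be sketched like this. This is a hedged, runnable illustration using Python with an in-memory SQLite table standing in for the real billion-row Teradata table; the table name, column names, chunk size, and the assumed-unique ORDER BY column are all placeholders, and Teradata's actual syntax (SEL, QUALIFY, etc.) will differ:

```python
import sqlite3

# Tiny stand-in for the real 1-billion-row table, so the pattern
# is runnable end to end. (Window functions need SQLite >= 3.25.)
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE big_table (a INTEGER, b TEXT)")
conn.executemany("INSERT INTO big_table VALUES (?, ?)",
                 [(i, f"row{i}") for i in range(10)])

CHUNK = 3  # would be 1_000_000 against the real table

# One-time copy with a stable ROW_NUMBER, as suggested above.
# The ORDER BY column(s) must be unique for the numbering to be stable.
conn.execute("""
    CREATE TABLE big_table_numbered AS
    SELECT a, b, ROW_NUMBER() OVER (ORDER BY a) AS rn
    FROM big_table
""")

# Pull the table chunk by chunk; each row comes back exactly once.
seen = []
chunk_no = 0
while True:
    lo = chunk_no * CHUNK + 1
    hi = lo + CHUNK - 1
    rows = conn.execute(
        "SELECT a, b FROM big_table_numbered WHERE rn BETWEEN ? AND ?",
        (lo, hi)).fetchall()
    if not rows:
        break
    seen.extend(rows)
    chunk_no += 1

print(len(seen))  # all 10 rows, no duplicates
```

The same loop structure would drive the extraction into R (or any client): each iteration fetches one non-overlapping window of row numbers, so no chunk repeats data from a previous one.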
I want to bring 1 billion rows of data into R and do some analysis on it. I can't read 1 billion rows at a time, so I need chunks of data.