This is a tough question, because anything I say here may be totally wrong.
Timing a query is already hard because the total workload of the platform varies; this is why we look at IO / CPU / Spool instead, since those metrics don't depend on the workload. Let's suppose you have a constant workload.
First, try to find the intrinsic difference between your two environments.
Run a non-trivial query on both of them, with similar volume and similar general workload; if one answers in 10 seconds and the other in 20 seconds, you can assume the first one is roughly twice as fast.
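To dampen the workload noise mentioned above, it helps to repeat the measurement and keep the best time. A minimal sketch of that idea (the two `env_*` functions are hypothetical stand-ins for running the same query on each environment, not part of the original question):

```python
import time

def time_query(run_query, repeats=5):
    """Run the same workload several times and keep the best wall-clock
    time, to reduce the impact of concurrent activity on a shared platform."""
    best = float("inf")
    for _ in range(repeats):
        start = time.perf_counter()
        run_query()
        best = min(best, time.perf_counter() - start)
    return best

# Hypothetical stand-ins: env_b does twice the work of env_a, mimicking
# an environment that is half as fast on the same query.
env_a = lambda: sum(i * i for i in range(200_000))
env_b = lambda: sum(i * i for i in range(400_000))

ratio = time_query(env_b) / time_query(env_a)
print(f"env_b / env_a time ratio: {ratio:.2f}")
```

If the ratio stays near 2 across runs, the "twice as fast" conclusion is reasonable; a single measurement is much less trustworthy.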
Then, about your 1 million rows vs. 10 billion rows:
The number of rows in the table doesn't really matter.
What matters is the number of rows spooled and processed.
If your table is, for example, daily partitioned and you work on one day, it's fine - the performance of the query should remain the same over time (assuming the daily volume is roughly constant).
If you have to process all 10 billion rows, well, it's probably not a query that scales.
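The point that table size matters less than rows processed can be illustrated with a tiny SQLite sketch. The `events` table and its `day` column are hypothetical, standing in for a daily-partitioned fact table; an index on `day` plays the role of partition elimination:

```python
import sqlite3

# In-memory database with a hypothetical "events" table.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE events (day INTEGER, value REAL)")
conn.execute("CREATE INDEX idx_day ON events (day)")

# Load 100 days x 10,000 rows = 1,000,000 rows total.
rows = [(d, float(i)) for d in range(100) for i in range(10_000)]
conn.executemany("INSERT INTO events VALUES (?, ?)", rows)

# Touching one day: the index confines the work to ~10,000 rows,
# no matter how many days the table accumulates over time.
one_day = conn.execute(
    "SELECT COUNT(*) FROM events WHERE day = 42"
).fetchone()

# Touching every row: the work grows with the total table size,
# so this query gets slower as history piles up.
all_days = conn.execute("SELECT COUNT(*) FROM events").fetchone()

print(one_day)   # rows processed for one day
print(all_days)  # rows processed for a full scan
```

The single-day query stays cheap as the table grows; the full scan is the "probably not a good scaling query" case above.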