I'm looking for ways that I can reduce the time to run select statements in my data warehouse.
We are currently running Postgres Enterprise 188.8.131.52, with the intention of upgrading to 9.6 within the next few months.
There is a fact table with about 95 million rows and single-column B-tree indexes on all of the foreign key / id columns, which are a mix of smallint and integer types. There are also measure columns, such as dollar amounts, that we aggregate (sum / avg); these are NOT indexed. The table is loaded daily via JDBC insert/update statements through Pentaho, and it is partitioned by activity_date_key.
Running "select sum(amount) from table where date between 20160701 and 20160801" takes 0.2 seconds, but if I widen the range to between 20160101 and 20160801, the run time jumps to 70 seconds. (The date field is of type integer.)
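For context, this is roughly what I run to compare the plans for the fast and slow ranges (fact_table, activity_date_key, and amount are placeholder names for my actual schema):

```sql
-- Placeholder names; BUFFERS shows how many pages each range touches.
EXPLAIN (ANALYZE, BUFFERS)
SELECT sum(amount)
FROM fact_table
WHERE activity_date_key BETWEEN 20160101 AND 20160801;
```

My understanding is that for the wider range the planner may switch from an index scan to a sequential scan over many partitions, which would explain the jump from 0.2 to 70 seconds, but I'd welcome corrections.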
I'm looking for ideas on how to reduce this time. Would different index types help? I've read that 9.6 ships with BRIN (block range) indexes, but I'm not sure whether they would apply here. Are there any database configuration parameters I can safely tweak? Or is my problem simply too much data in general? Any tips are welcome, and let me know if you need any more information about my environment. Thank you.
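For reference, this is the kind of BRIN index I'm considering after the 9.6 upgrade (table and column names are placeholders for my real schema; I'm assuming BRIN is a good fit because the table is loaded daily, so rows should be physically ordered by date):

```sql
-- Sketch only: BRIN stores min/max per block range, so it is tiny
-- compared with a B-tree and works best when the column values
-- correlate with physical row order (as with append-only daily loads).
CREATE INDEX fact_activity_date_brin
    ON fact_table USING brin (activity_date_key);
```

If that assumption about physical ordering is wrong (e.g. because of updates reshuffling rows), I gather BRIN would be much less effective, so confirmation either way would be appreciated.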