Discussion: BUG #8013: Memory leak
s***@comcast.net
2013-03-30 14:01:27 UTC
The following bug has been logged on the website:

Bug reference: 8013
Logged by: Rae Stiening
Email address: ***@comcast.net
PostgreSQL version: 9.2.3
Operating system: Suse Linux 9.3
Description:

The query:
SELECT pts_key, count(*)
FROM tm_tm_pairs
GROUP BY pts_key
HAVING count(*) != 1
ORDER BY pts_key

Which is executed as:
GroupAggregate  (cost=108680937.80..119278286.60 rows=470993280 width=4)
  Filter: (count(*) <> 1)
  ->  Sort  (cost=108680937.80..109858421.00 rows=470993280 width=4)
        Sort Key: pts_key
        ->  Seq Scan on tm_tm_pairs  (cost=0.00..8634876.80 rows=470993280 width=4)

uses all available memory (32GB). pts_key is an integer and the table
contains about 500 million rows.

PostgreSQL 9.2.3 on x86_64-unknown-linux-gnu, compiled by gcc (SUSE Linux)
4.7.2 20130108 [gcc-4_7-branch revision 195012], 64-bit

compile options:

env CFLAGS='-O3 -march=native' ./configure --with-segsize=128
ajmcello
2013-03-31 03:41:14 UTC
unsubscribe
Daniel Farina
2013-03-31 18:20:18 UTC
Post by ajmcello
unsubscribe
That's not how you unsubscribe from this list; you'll want to do that here:

http://www.postgresql.org/community/lists/subscribe/
--
fdr
Jeff Lake
2013-03-31 03:42:29 UTC
memory leak with 500 million rows??
sounds like too big of a db

-----------------------------------------------------
-Jeff Lake K8JSL
MichiganWxSystem.com
AllisonHouse.com
TheWeatherCenter.net
GRLevelXStuff.com
ajmcello
2013-03-31 16:44:49 UTC
unsubscribe
Greg Stark
2013-03-31 15:01:21 UTC
Post by s***@comcast.net
uses all available memory (32GB). pts_key is an integer and the table
contains about 500 million rows.
Please post the schema definition and all the log messages that occur
from this. If Postgres runs out of memory, the log should include a dump
of the memory usage.
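
From psql, for instance (using the table name from your report), the
schema can be dumped with:

    \d+ tm_tm_pairs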
--
greg
Peter Geoghegan
2013-03-31 15:40:03 UTC
Post by s***@comcast.net
env CFLAGS='-O3 -march=native' ./configure --with-segsize=128
Why did you build with a segment size of 128GB? Postgres binaries
built with a non-standard segment size are not widely used.
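
For what it's worth, the build-time segment size is exposed as a
read-only preset, so it's easy to confirm on a running server:

    SHOW segment_size;  -- 1GB on a standard build; should report
                        -- 128GB for a --with-segsize=128 build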
--
Regards,
Peter Geoghegan
Tom Lane
2013-03-31 17:16:12 UTC
Post by s***@comcast.net
SELECT pts_key, count(*)
FROM tm_tm_pairs
GROUP BY pts_key
HAVING count(*) != 1
ORDER BY pts_key
GroupAggregate  (cost=108680937.80..119278286.60 rows=470993280 width=4)
  Filter: (count(*) <> 1)
  ->  Sort  (cost=108680937.80..109858421.00 rows=470993280 width=4)
        Sort Key: pts_key
        ->  Seq Scan on tm_tm_pairs  (cost=0.00..8634876.80 rows=470993280 width=4)
uses all available memory (32GB). pts_key is an integer and the table
contains about 500 million rows.
That query plan doesn't look like it should produce any undue memory
consumption on the server side. How many distinct values of pts_key are
there, and what are you using to collect the query result client-side?
psql, for instance, would try to absorb the whole query result
in-memory, so there'd be a lot of memory consumed by psql if there are
a lot of pts_key values. (You can set FETCH_COUNT to alleviate that.)
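
For example, in psql (10000 is just an illustrative batch size):

    \set FETCH_COUNT 10000

With that set, psql fetches the result in 10000-row batches through a
cursor instead of absorbing the whole result set in memory at once.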

A different line of thought is that you might have set work_mem to
an unreasonably large value --- the sort step will happily try to
consume work_mem worth of memory.
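
It's easy to check the active value; for instance (the 64MB below is only
an illustrative value, not a recommendation):

    SHOW work_mem;
    SET work_mem = '64MB';  -- affects the current session only

Keep in mind work_mem is a per-sort (and per-hash) limit, so a complex
query can consume several multiples of it.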

regards, tom lane
Jeff Janes
2013-03-31 17:32:17 UTC
Post by Tom Lane
A different line of thought is that you might have set work_mem to
an unreasonably large value --- the sort step will happily try to
consume work_mem worth of memory.
I don't think that can be the problem here, because the memtuples array
can never exceed 1GB even if work_mem is much larger than that. Even if
his sort were using pass-by-reference values (I don't think it would be
for an integer sort key), they should be skinny enough that that limit
would keep the sort from blowing out memory.
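
If the problem can be reproduced, the sort's actual memory use shows up
in the run-time plan; for instance, with the query from the report:

    EXPLAIN (ANALYZE, BUFFERS)
    SELECT pts_key, count(*)
    FROM tm_tm_pairs
    GROUP BY pts_key
    HAVING count(*) <> 1
    ORDER BY pts_key;

The Sort node should report something like "Sort Method: external merge
Disk: ..." or "quicksort  Memory: ...".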

Cheers,

Jeff
Rae Stiening
2013-03-31 19:59:39 UTC
I found that after replacing the postgresql.conf file with the stock one
that initdb produces, the query ran without a memory problem. I looked at
the "bad" configuration file and couldn't see anything wrong with it.
Unfortunately, because of a typing error, the bad file was accidentally
deleted, and I have since been unable to reproduce the bad behavior.
After editing the stock file to match what I had intended for the erased
one, the query still ran without a problem; memory usage topped out at
about 2.1GB. Even setting work_mem and maintenance_work_mem to 30000MB
did not change the maximum memory usage during the query.
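
In hindsight, listing the non-default settings on the running server
would have made the comparison easier before the file was lost; this is
a standard catalog query:

    SELECT name, setting, source
    FROM pg_settings
    WHERE source <> 'default'
    ORDER BY name;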

Regards,
Rae Stiening