I am using the following code with Npgsql 9.0.2 inside a .NET 8 web API to insert records into a table, but I am having trouble: my database gets completely overloaded, with the CPU hovering at 100% and AWS reporting 100+ sessions in use.
The database is RDS PostgreSQL 17.2 running on an r7g.xlarge instance (4 vCPUs, 32 GB RAM).
NpgsqlDataSourceBuilder dataSourceBuilder = new(_application.TransactionalConnectionString);
dataSourceBuilder.MapComposite<EventBaseComposite>("event_type");
await using NpgsqlDataSource dataSource = dataSourceBuilder.Build();
await using NpgsqlConnection connection = await dataSource.OpenConnectionAsync();

await using NpgsqlCommand command1 = new($"SET search_path TO {schema};", connection);
await command1.ExecuteNonQueryAsync();

await using NpgsqlCommand command = new("event_insert_list", connection)
{
    CommandType = CommandType.StoredProcedure,
    CommandTimeout = 200
};
command.Parameters.Add(new NpgsqlParameter
{
    ParameterName = "events",
    DataTypeName = "event_type[]",
    Value = collection.Select(item => (EventBaseComposite)item).ToList(),
});
return await command.ExecuteNonQueryAsync();
This is the stored procedure used:
WITH inserted_rows AS (
    INSERT INTO event (
        id, type_id, category, response
    )
    SELECT id,
           type_id,
           category,
           response
    FROM UNNEST(events) AS source (
        id, type_id, category, response
    )
    ON CONFLICT DO NOTHING
    RETURNING id, type_id),
Here is my connection string:
"ConnectionString": "Host=hostname; Port=5432; Database=events; Username=events; Password=password"
Am I reading this wrong, or are there really hundreds of connections being opened? So far I have tried VACUUMing all of the tables, including the ones in pg_catalog, and rebuilding all of the indexes, but the result is still the same.
asked Jan 29 at 14:01 by Paul

1 Answer
You haven't specified a pool size in the connection string, so Npgsql's default Maximum Pool Size of 100 connections is being used.
Try with
"ConnectionString": "Host=hostname; Port=5432; Database=events; Username=events; Password=password; Maximum Pool Size=10"
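If you would rather set the limit in code than edit the raw string, the same keyword can be applied through NpgsqlConnectionStringBuilder; a minimal sketch (the MaxPoolSize property corresponds to the Maximum Pool Size keyword shown above):

```csharp
using Npgsql;

// Start from the existing connection string and cap the pool programmatically.
var csb = new NpgsqlConnectionStringBuilder(
    "Host=hostname; Port=5432; Database=events; Username=events; Password=password")
{
    MaxPoolSize = 10  // same effect as "Maximum Pool Size=10"
};

// Build the data source once and reuse it; each NpgsqlDataSource owns its pool.
await using var dataSource = NpgsqlDataSource.Create(csb.ConnectionString);
```

Note that the pool limit is per data source, so the data source (and its pool) should be created once and shared, not rebuilt for every request.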