Trending

#Postgresql

Latest posts tagged with #Postgresql on Bluesky

Migrating from Oracle to PostgreSQL: Strategy, Tools, and Best Practices, Wed, Apr 15, 2026, 12:00 PM | Meetup Migrating from Oracle to PostgreSQL has become a key modernization strategy for organizations seeking cost optimization, open-source flexibility, and cloud readiness. Howev

The next Postgres for All is a week from today! Join us online for “Migrating from Oracle to PostgreSQL: Strategy, Tools, and Best Practices” with Y V Ravi Kumar, Phani Kadambari and Arun Kumar Samayam!

www.meetup.com/postgres-mee...

#PostgreSQL #postgres

Registration – PgDay Boston 2026 Register now for PgDay Boston 2026!

Early Bird registration for PgDay Boston is open until April 17th! Register today and get a nice discount!

2026.pgdayboston.org/registration/

#PostgreSQL #postgres #conference

How to Install and Run ArchiveBox on Ubuntu VPS Server in 5 Minutes (Quick Start Guide)

How to Install and Run ArchiveBox on Ubuntu VPS Server in 5 Minutes (Quick Start Guide)
#archivebox #installguide #nodejs #npm #opensource #postgresql #python #selfhosted #selfhosting #ubuntu #vps #vpsguide #Cloud #Guides #VPS

PGConf.dev 2026 PGConf.dev 2026 will be held on May 19–22 in Vancouver, BC to bring users, developers, and organizers together to advance PostgreSQL development.

Stacey Haysler will be adding her perspective as a community organizer to the panel “Beyond the Source: The Human Architecture of PostgreSQL” at PGConf.dev! We hope you’ll join us on Tuesday, May 19th!

2026.pgconf.dev/session/558

#PostgreSQL #postgres #conference

Volunteer Introduction Thank you for your interest in volunteering with PgUS! Please complete the requested fields, and we'll be in touch soon! For Committee roles, please note the following: *Each Committee meets once per…

Interested in volunteering for PG Summit US? We have multiple volunteer opportunities available. Fill out the form below and we’ll be in touch!

bit.ly/pgusvolunteer

@PostgreSQL #PostgreSQL #postgres

PgUS - Become a Member The United States PostgreSQL Association, affectionately known as PgUS, is an IRS 501(c)(3) public charity. Our purpose is to support the growth and education about the world's most advanced open source…

Your membership with PgUS allows you to vote in elections, run for the Board, and help shape the future of our community! Join today!

postgresql.us/becomeamember/

#PostgreSQL #postgres

PgDay Boston 2026 A 1-day PostgreSQL community conference in Boston, US.

PgDay Boston will be held at the Museum of Science! We think this will be a fantastic setting for a conference and we hope you’ll agree!

2026.pgdayboston.org/venue/

#PostgreSQL #postgres #conference

In the world of PostgreSQL performance tuning, **`work_mem`** is one of the most frequently misunderstood parameters. Many administrators assume it represents a total memory limit for a database session or a single query, but in reality its impact is much more granular and potentially explosive. Understanding how this setting scales with complex SQL statements is the key to balancing high-speed execution with system stability.

* * *

#### **What is `work_mem`?**

The **`work_mem`** parameter sets the **base maximum amount of memory** that an individual query operation (such as a sort or a hash table) can consume before PostgreSQL is forced to write data to **temporary disk files**. Writing to disk (often called "spilling") prevents the system from crashing due to memory exhaustion, but it is significantly slower than processing data in RAM.

#### **The Hidden Multiplier: Multiple Operations**

The most critical takeaway is that `work_mem` is **not a per-query limit**. A single complex query often involves multiple operations at once, such as several joins and a final sort. Each of these nodes in the execution plan is allowed to use the full `work_mem` allocation. Consequently, a single connection can easily consume **many times** the value of `work_mem` at its peak.

#### **Amplification through Parallelism**

When PostgreSQL uses **parallel query**, memory demand scales even further. Resource limits like `work_mem` apply **individually to each worker process**. For example, a query using four background workers and one leader process could theoretically use **five times** the memory of a serial query, as each process manages its own set of sorts and hashes.

* * *

#### **Practical Examples of Memory Consumption**

**Example 1: The Multi-Join and Sort Scenario**

Consider a query that joins three large tables and sorts the final result:

```sql
SELECT *
FROM orders o
JOIN customers c ON o.customer_id = c.id
JOIN products p ON o.product_id = p.id
ORDER BY o.order_date;
```

If the execution plan for this query uses two **Hash Joins** and one **Sort** node, and `work_mem` is set to 64MB:

* **Hash Join 1:** up to 128MB (64MB * `hash_mem_multiplier` of 2.0).
* **Hash Join 2:** up to 128MB.
* **Sort node:** up to 64MB.
* **Total peak RAM:** this single query could consume **320MB** of local memory.

**Example 2: Parallel Query Impact**

If you run a heavy aggregation on a partitioned table using four parallel workers:

```sql
SELECT category, SUM(sales)
FROM large_sales_table
GROUP BY category;
```

If this plan involves a **Parallel Hash Aggregate**, every worker process and the leader can allocate their own memory for the hash table. With `work_mem` at 100MB, the system might allocate roughly **500MB** (1 leader + 4 workers) to satisfy this single statement.

* * *

#### **How to Audit and Adjust**

To determine whether your `work_mem` is set correctly, run **`EXPLAIN ANALYZE`** on your most complex queries.

1. **Check for disk spilling:** look for **"Sort Method: external merge Disk"** or **"Batches"** in the output. If these appear, the operation exceeded your `work_mem` and was forced to use slow disk I/O.
2. **Verify memory usage:** a sort node that completed entirely in RAM explicitly states the memory used (e.g., "Sort Method: quicksort Memory: 74kB").
3. **Real-time diagnostics:** if a connection seems to be hogging RAM, use the function **`pg_log_backend_memory_contexts(pid)`** to dump a detailed breakdown of that process's memory usage into the PostgreSQL server log.

#### **Summary Table: Memory Limits by Operation**

Operation Type | Memory Limit Formula
---|---
**Standard Sort** (`ORDER BY`, `DISTINCT`) | `work_mem`
**Hash-based** (Hash Join, Hash Agg) | `work_mem * hash_mem_multiplier`
**Parallel Query** | `(workers + 1) * per-operation limit`
**Maintenance** (`CREATE INDEX`, `VACUUM`) | `maintenance_work_mem`

**Final Recommendation:** be conservative with the global `work_mem` setting to avoid the Linux **OOM (Out of Memory) killer**. If specific reporting queries require more RAM, it is safer to raise the limit for just that session using **`SET work_mem = '256MB';`** than to change it for the entire cluster.
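The audit-and-adjust workflow described in the article can be sketched end to end in a psql session. This is a minimal illustration, not taken from the original post: the `orders` table and the 256MB value are placeholders for your own query and sizing.

```sql
-- Sketch of the audit loop (table name and values are illustrative).
-- 1. Inspect the plan: does the sort spill to disk?
EXPLAIN (ANALYZE, BUFFERS)
SELECT * FROM orders ORDER BY order_date;
-- A spilling plan reports something like:
--   Sort Method: external merge  Disk: 181520kB

-- 2. Raise the limit for this session only, then re-check:
SET work_mem = '256MB';
EXPLAIN (ANALYZE, BUFFERS)
SELECT * FROM orders ORDER BY order_date;
-- An in-memory sort reports:
--   Sort Method: quicksort  Memory: ...

-- 3. Return to the server default when done:
RESET work_mem;
```

Keeping the change session-local avoids multiplying the larger limit across every connection and every plan node, which is exactly the OOM risk a global change creates.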

The Multiplier Effect: Mastering PostgreSQL’s work_mem for Complex Queries In the world of PostgreSQL performance tuning, work_mem is one of the most frequently misunderstood parameters. Many adm...

#Postgresql

Imagine you are managing a multi-terabyte database, but only a fraction of that data is actually powering your application's real-time performance. Without knowing which tables and **indexes** are "hot" (frequently accessed and ideally stored in memory), you are essentially flying blind when provisioning **RAM** or optimizing queries. You might be suffering from slow response times because critical data is being "washed out" of the **shared buffers** by large, infrequent sequential scans, yet you lack the visibility to pinpoint the exact cause.

Database administrators often face scenarios where adding more memory doesn't seem to improve performance, or they need to decide which tables to move to faster **NVMe storage**. Identifying **hot data** allows you to focus your tuning efforts on the objects that matter most to your users. By leveraging PostgreSQL's internal metadata and specific extensions, you can gain a granular view of your database's "**working set**" and ensure that your most important data remains pinned in memory for lightning-fast access.

* * *

#### **Key Takeaways for Identifying Hot Data**

* **Shared buffers cache** → the primary memory area where Postgres stores data pages to avoid expensive disk I/O.
* **Usage count** → a per-page metric ranging from **0 to 5** that indicates how frequently a specific data page is accessed; a 5 is considered "very hot".
* **Cache hit ratio** → a vital health metric (ideally **> 90%**) that measures how often the engine finds data in RAM versus reading it from disk.
* **Index statistics** → system views like **`pg_stat_user_indexes`** help identify which indexes are being scanned the most.
* **Buffer context** → the **`pg_buffercache`** extension gives a real-time "x-ray" of what currently occupies your RAM.

* * *

#### **Method 1: Using pg_buffercache for Real-Time Heat Analysis**

The most direct way to find hot data is the **`pg_buffercache`** extension. This module provides a view that shows exactly which tables and indexes are in the **shared buffers** and how "hot" they are based on their access frequency.

**Command to install:**

```sql
CREATE EXTENSION pg_buffercache;
```

**SQL to view overall cache heat:** this query groups your memory usage by **`usagecount`** (0–5), showing how much of your cache is truly hot.

```sql
SELECT usagecount,
       count(*) AS blocks,
       pg_size_pretty(count(*) * 8192) AS size
FROM pg_buffercache
GROUP BY usagecount
ORDER BY usagecount DESC;
```

**Output example:**

```
 usagecount | blocks | size
------------+--------+--------
          5 |  12800 | 100 MB
          1 |   2560 | 20 MB
          0 |   1024 | 8 MB
```

**Analysis:** if you see a large volume of data with a **`usagecount`** of 5, those objects are your most critical hot data.

* * *

#### **Method 2: Monitoring Cumulative Statistics (Scans and I/O)**

PostgreSQL maintains counters for every table and index. Objects with high **`idx_scan`** counts but low **`heap_blks_read`** are frequently accessed and successfully cached.

**Command to find frequently scanned tables:**

```sql
SELECT relname, seq_scan, idx_scan, n_live_tup
FROM pg_stat_user_tables
ORDER BY idx_scan DESC
LIMIT 10;
```

**Understanding the metrics:**

* **`seq_scan`**: high numbers here on large tables can "wash out" your cache.
* **`idx_scan`**: high numbers indicate an object is part of the **hot data** set.
* **`heap_blks_hit` vs `heap_blks_read`**: in the **`pg_statio_user_tables`** view, a high `heap_blks_hit` indicates the data is staying in RAM.

* * *

#### **Method 3: Inspecting Query Behavior with EXPLAIN (ANALYZE, BUFFERS)**

When you suspect a specific query is dealing with hot or cold data, use the **`BUFFERS`** option of **`EXPLAIN`**. This shows exactly how many blocks were found in the cache versus how many had to be read from disk.

**Command example:**

```sql
EXPLAIN (ANALYZE, BUFFERS) SELECT * FROM customer WHERE id = 101;
```

**Output details:** look for the **"Buffers"** line in the output:

```
Buffers: shared hit=3 read=1
```

This tells you that 3 blocks were already hot in memory, while 1 block was cold and required a disk read.

* * *

#### **Step-by-Step Process to Identify Your Working Set**

1. **Enable statistics**: ensure **`track_counts`** and **`track_io_timing`** are enabled in your configuration to get accurate I/O data.
2. **Calculate the global hit ratio**: run a query against **`pg_stat_database`** to see if your overall **shared_buffers** are large enough for your workload.
3. **Identify top objects**: use the queries from Method 2 to list the top 10 most scanned tables and indexes.
4. **Install pg_buffercache**: execute `CREATE EXTENSION pg_buffercache;` to enable deep inspection.
5. **Verify object heat**: run a join between **`pg_buffercache`** and **`pg_class`** to see which specific tables have the most blocks with a **`usagecount`** of 5.
6. **Optimize**: for hot tables with low cache hit ratios, consider increasing **`shared_buffers`** or creating **indexes** to prevent cache-clearing sequential scans.

* * *

#### **Summary of Hot Data Monitoring Tools**

View/Extension | Key Columns | Primary Purpose
---|---|---
**`pg_buffercache`** | `usagecount`, `relfilenode` | Seeing exactly what is in RAM right now.
**`pg_stat_user_tables`** | `idx_scan`, `seq_scan` | Tracking access frequency over time.
**`pg_statio_user_tables`** | `heap_blks_hit`, `heap_blks_read` | Measuring cache effectiveness for specific tables.
**`EXPLAIN (ANALYZE, BUFFERS)`** | `shared hit`, `shared read` | Auditing the memory impact of a single query.

* * *

#### **FAQs**

**What does a `usagecount` of 5 actually mean?** In the Postgres buffer management algorithm, each access to a page increments its count (up to 5). A count of 5 means the page is heavily used and sits at the back of the "eviction line".

**Why is my cache hit ratio high, but the database is still slow?** A high hit ratio in **`pg_stat_database`** only measures the Postgres cache. You may still be experiencing latency if the **OS page cache** is struggling or if you have heavy lock contention.

**Can I manually "warm up" my hot data after a restart?** Yes. You can use the **`pg_prewarm`** extension to load specific tables or indexes into the **shared buffers** on demand.
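Step 5 of the process above mentions joining `pg_buffercache` with `pg_class` but does not show the query. Here is a minimal sketch, assuming the extension is already installed in the current database; a fully robust version would also filter `b.reldatabase` to the current database, since shared buffers hold pages from every database in the cluster.

```sql
-- Which relations own the most "very hot" (usagecount = 5) buffers?
-- Assumes: CREATE EXTENSION pg_buffercache; has been run here.
SELECT c.relname,
       count(*)                        AS hot_blocks,
       pg_size_pretty(count(*) * 8192) AS hot_size
FROM pg_buffercache b
JOIN pg_class c
  ON b.relfilenode = pg_relation_filenode(c.oid)
WHERE b.usagecount = 5
GROUP BY c.relname
ORDER BY hot_blocks DESC
LIMIT 10;
```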

How to Find and Analyze Hot Data in PostgreSQL Imagine you are managing a multi-terabyte database, but only a fraction of that data is actually powering your application’s real-time performance. ...

#Postgresql


#Directus layers a blazingly fast #NodeJS API on top of any existing SQL database. No schema changes needed, works with what you already have.

🗄️ Database Freedom:
#PostgreSQL, #MySQL, #SQLite, #MariaDB, MS-SQL, #CockroachDB & #OracleDB — you choose, Directus connects.

What's new with Postgres at Microsoft, 2025 edition | Microsoft Community Hub New edition of a popular post that shares Postgres work at Microsoft from the last year in open source and in Azure Database for PostgreSQL.

We’re excited to announce Microsoft as a Gold Sponsor for Pg Summit US!

aka.ms/pg-at-micros...

#PostgreSQL #postgres #sponsor


🛡️ Enterprise data sovereignty: getting closer with PostgreSQL on Kubernetes

thenewstack.io/sovereign-postgresql-kub...

#PostgreSQL #Kubernetes #SoberaníaDigital #CloudNative

True enterprise sovereignty is more approachable than ever, thanks to K8s-powered cloud-neutral PostgreSQL EDB's Gabriele Bartolini explains how Kubernetes-powered PostgreSQL enables sovereign DBaaS, giving enterprises cloud-neutral portability and bare-metal speed.

True sovereignty starts with the database. If your #PostgreSQL isn’t portable across environments, you don’t really control your stack.

thenewstack.io/sovereign-postgresql-kub...


EXPLAIN ANALYZE in PostgreSQL: reading execution plans like an expert. Hi, Habr! A query runs for 30 seconds. You look at the...

#explain #psql #PostgreSQL #план #выполнения #оптимизация #запросов #индексы #PostgreSQL #производительность #БД

"Databases in the Agent Era" with Monica Sarbu, Tue, Apr 14, 2026, 12:00 PM | Meetup Join us virtually on Tuesday, April 14th for "Databases in the Agent Era" with Monica Sarbu. We are entering a new era where AI agents are first-class users of data infras

Join us at noon PDT one week from today for "Databases in the Agent Era" with Monica Sarbu! RSVP now!

www.meetup.com/postgresql-1...

@PostgreSQL #PostgreSQL #postgres #meetup

Original post on hachyderm.io

New blog from the Data team at @ubuntu!

"Seamless PostgreSQL Deployment on RISC-V with Juju and Ubuntu" is an article covering the various ways to get #PostgreSQL up and running on Ubuntu on RISC-V, including across multiple cloud providers […]

PgDay Boston 2026 Sponsors We are proud to be associated with these fine sponsors, without whom PgDay Boston 2026 would not be possible!

By sponsoring PgDay Boston, your company can be a part of building the local PostgreSQL community! Check out our sponsorship opportunities today!

2026.pgdayboston.org/sponsors/

#PostgreSQL #postgres #conference

How to Install and Deploy FusionPBX on Debian VPS

How to Install and Deploy FusionPBX on Debian VPS
#certbot #debian #freeswitch #fusionpbx #installguide #letsencrypt #nginx #opensource #pbx #php #postgresql #selfhosted #selfhosting #ufw #voip #vps #Cloud #Guides #VPS
blog.radwebhosting.com/deploy-fusio...

An On-Call PostgreSQL Expert | PGX: The PostgreSQL Experts™ PGX is an independent PostgreSQL consultancy, providing 24x7 support and consulting for installations of all sizes.

PGXpertise™ lets you have a world-leading PostgreSQL Expert on call. This is a monthly retainer service that provides a block of consulting hours that can be used for any service PGX provides, at a significant discount. Contact us today!

pgexperts.com/services/pgx...

#PostgreSQL #postgres


🐧 PostgreSQL GUI Clients for Ubuntu Linux (2026)

👀 Explore the most efficient and widely used #PostgreSQL tools: is.gd/6Yv2xU

✅ Try the AI-powered #dbForgeStudio for PostgreSQL and see how it helps create and manage databases efficiently: is.gd/dgBG1B

#PostgreSQLTools

How to Install and Run PortNote on Debian VPS

How to Install and Run PortNote on Debian VPS
#certbot #debian #docker #letsencrypt #nginx #opensource #portnote #postgresql #selfhosted #selfhosting #vps #Cloud #Guides #VPS

PostgreSQL | Meetup Pro PostgreSQL is on Meetup Pro with more than 12525 members across 21 Meetups worldwide. Meetup Pro is the professional tool for organizing and communicating a network of users, partners, contributors…

Our affiliated PUGs are looking for speakers! If you're local to one of the PUGs listed here: www.meetup.com/pro/postgres... and would like to present, contact us and we can put you in touch with the organizers!

buff.ly/4gfy2M3

#PostgreSQL #postgres

PostgreSQL 18 Cut My GIN Index Build from Months to Hours A PostgreSQL 17 GIN index build on 91 million audio fingerprints was heading toward a 118-day worst case. PostgreSQL 18's parallel GIN builds and saner tuning cut it to roughly 10-15 hours.

I just published a ridiculous #PostgreSQL story: one GIN index on 91 million audio fingerprints looked like it might take 118 days. PostgreSQL 18 and saner tuning dragged it back to 10-15 hours. If you enjoy database horror stories with a happy ending: www.attilagyorffy.com/blog/postgre...

How moving one word can speed up a query 10–50x #best-practice #database #pattern #postgresql #reading-list #sql

One SQL rewrite, 32x faster: querying the small minority (deleted rows) with NOT EXISTS, instead of the large majority with EXISTS, wins dramatically. No schema changes needed.
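The rewrite this post describes can be sketched with a hypothetical soft-delete schema (all table names are illustrative, not from the linked article). The two forms return the same rows only when every row is either live or recorded in the small deleted-rows table.

```sql
-- Hypothetical schema: nearly all rows in "events" are live;
-- a small "deleted_events" table records the few soft-deleted ids.

-- Slow form: confirm each row IS in the big "live" set.
SELECT e.id
FROM events e
WHERE EXISTS (SELECT 1 FROM live_events l WHERE l.id = e.id);

-- Often much faster: confirm each row is NOT in the small
-- "deleted" set, letting the planner anti-join against the
-- minority instead of semi-joining against the majority.
SELECT e.id
FROM events e
WHERE NOT EXISTS (SELECT 1 FROM deleted_events d WHERE d.id = e.id);
```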

#best-practice #database #pattern #postgresql #reading-list #sql


We'll be in Bologna this May, and we'll bring CloudNativePG pins! #PostgreSQL #Kubernetes

Registration – PgDay Boston 2026 Register now for PgDay Boston 2026!

Early Bird tickets are still available, but don’t wait! Get your ticket to PgDay Boston today!

2026.pgdayboston.org/registration/

#PostgreSQL #postgres #conference


Well, who's familiar with it? ST_Letters, a Text2Multipolygon geoobserver.de/2026/04/07/n... #PostGIS #PostgreSQL #gistribe #gischat #fossgis #foss4g #OSGeo #spatial #geospatial #opensource #mapping #gis #geo #geoObserver pls RT


SPQR in fintech: a real migration to a sharded PostgreSQL installation. This is Denis Volkov from the platf... team

#postgresql #шардирование #горизонтальное #масштабирование #spqr


Issue 126 is all about locking down admin-only properties with both security and reusability in mind alongside adding #PostgreSQL support to #Umbraco! Check out Bernadet and Dirk's fantastic articles as well as some umbazing packages and more.

skrift.io/126

#oss #opensource #dotnet

Enterprise Level QR-Based Room Service App for Luxury Hotels API Development & Node.js Projects for ₹12500-37500 INR. TITLE: Full-Stack Developer Needed — Enterprise QR-Based Room Dining & Ordering Platform for Hospitality --- OV



#API #Development #CI/CD #Docker #Full #Stack #Development #Next.js #Node.js #PostgreSQL #RESTful
