Log Manager v10
You can use the PEM Log Manager to simplify server log configuration for Postgres instances. With Log Manager, you can modify all of your server log parameters including:
- Where log files are written
- How often log files are written
- The type of information written to log files
- The format of log file entries
- Log rotation properties
Before using Log Manager to define logging properties for a server, you must specify the name of the associated Advanced Server or PostgreSQL database server in the Service ID field on the Advanced tab of the New Server Registration or Properties dialog box. The server is available for configuration on the Server Selection dialog box only if you specify the name of the service in the Service ID field.
For example, suppose you're setting logging preferences for an Advanced Server 9.4 instance that resides on a Linux host. Set the Service ID field on the Advanced tab of the Properties dialog box for the monitored server to `ppas-9.4`.
Note
- Log Manager depends on Settings and Server Log Configuration probes to populate all the fields in the wizard. Therefore, ensure that those probes for selected servers are enabled. In addition, set the execution frequency for those probes to a minimum to ensure that Log Manager reflects the latest log configurations.
- Rerun Log Manager if you make any manual changes related to logging in the configuration files, such as `postgresql.conf`. PEM doesn't reflect those changes automatically.
To configure logging for a Postgres instance, you must register the server as a PEM-managed server, and the registration information must include the name of a service script.
To open Log Manager, in the PEM client, select Management > Log Manager. The wizard opens and a welcome message appears. Select Next.
The Server Selection dialog box displays a list of the server connections monitored by PEM. Select the check boxes next to the names of servers to which you want the Log Manager wizard to apply the specified configuration. Log Manager is disabled for any server displaying a red exclamation mark to the left of its name in the Server selection tree. A server might not be enabled for several reasons:
Log Manager can configure only a server that specifies a service ID on the Advanced tab of the Properties dialog box. To provide a service ID:
- In the tree, right-click the server name and select Disconnect Server from the context menu.
- If prompted, provide a password.
- From the context menu for the server, select Properties.
- On the Advanced tab, enter the name of the service in the Service ID field.
- Select Save.
If the PEM agent bound to the server doesn't have enough privileges to restart the server, the server is disabled.
If the PEM agent bound to the server is an older version than the associated PEM server, the server is disabled.
Select Next.
Use the Log configuration dialog box to specify how often to import log files to PEM and to specify log rotation details.
Options in the Import Logs box specify how often to import the log files to PEM:
- Use the switch next to Import logs to PEM to specify whether to import the log files to PEM and to display them on the Server Log Analysis dashboard.
- Use the Import Frequency list to specify how often to import log files to PEM.
Use the Log rotation configuration box to specify the maximum length (lifespan or size) of a log file:
- Use the Rotation Size field to specify the maximum size of a log file in megabytes. The default value is 10 MB. Set to 0 if you don't want to place a limit on the maximum size of a log file.
- Use the Rotation Time field to specify the number of whole days to store in each log file. The default value is 1 day.
Use the Truncation on Rotation switch to specify server behavior for time-based log file rotation:
- Select On to overwrite any existing log file that has the same name as a new file.
- Select Off to append any new log file entries to an existing log file with the same name as a new log file. This is the default behavior.
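The fields above map to the standard Postgres log rotation parameters. A minimal `postgresql.conf` sketch using the default values described above (shown for orientation, not as recommended settings):

```ini
# Log rotation parameters corresponding to the Log rotation configuration box
log_rotation_size = 10MB         # Rotation Size: 0 disables size-based rotation
log_rotation_age = 1d            # Rotation Time: whole days stored per log file
log_truncate_on_rotation = off   # Truncation on Rotation: off appends (the default)
```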
Select Next.
Use the Where to Log dialog box to specify where to write log files:
Select an option from the Log Destination box to specify a destination for the server log output:
- Set the stderr switch to Yes to write log files to stderr.
- Set the csvlog switch to Yes to write log files in a comma-separated value format. This option is enabled and not editable if you select Import logs to PEM on the Log configuration dialog box. If you aren't importing server log files to PEM, you can edit this option.
- Set the syslog switch to Yes to write log files to the system log files.
- On Windows, set the eventlog switch to Yes to write log files to the event log.
Use the Log collection box to specify your collection preferences:
- Set the Log Collector switch to Enable to redirect captured log messages (directed to stderr) into log files.
- Set the Log Silent Mode switch to Enable to run the server silently in the background, disassociated from the controlling terminal.
Use the Log Directory box to specify log file location preferences:
- Set the Change log directory for selected servers? switch to Yes to maintain each set of log files in a separate directory.
- Use the Directory name field to specify the directory to which to write log files. The directory resides beneath the `pg_log` directory under the installation directory of the monitored server.
Use the Log File Name field to specify a format for the log file name. If set to Default, the format is `enterprisedb-%Y-%m-%d_%H%M%S`, where:
- `enterprisedb` is the file name prefix
- `%Y` is the year that the log was stored
- `%m` is the month that the log was stored
- `%d` is the day that the log was stored
- `%H` is the hour that the log was stored
- `%M` is the minute that the log was stored
- `%S` is the second that the log was stored
When logging to syslog is enabled:
- Use the Syslog Facility list to specify the syslog facility to use.
- Use the Syslog Ident field to specify the program name that identifies EDB Postgres Advanced Server entries in system logs.
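The choices on this dialog box correspond to the usual logging destination parameters. A sketch of a `postgresql.conf` fragment, assuming stderr output collected by the log collector into dated files (the file name pattern shown is illustrative):

```ini
# Where to Log settings expressed as configuration parameters
log_destination = 'stderr'       # any of: stderr, csvlog, syslog, eventlog
logging_collector = on           # Log Collector: capture stderr into log files
log_directory = 'pg_log'         # Log Directory, relative to the data directory
log_filename = 'enterprisedb-%Y-%m-%d_%H%M%S.log'  # Log File Name format
syslog_facility = 'LOCAL0'       # Syslog Facility (used when logging to syslog)
syslog_ident = 'postgres'        # Syslog Ident: program name in system log entries
```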
Select Next.
Use the When to Log dialog box to specify the events that initiate a log file entry.
The severity levels in order of severity, from most severe to least severe, are:
- panic — Errors that cause all database sessions to abort.
- fatal — Errors that cause a session to abort.
- log — Information messages of interest to administrators.
- error — Errors that cause a command to abort.
- warning — Error conditions in which a command completes but might not perform as expected.
- notice — Items of interest to users. This is the default.
- info — Information implicitly requested by the user.
- debug5 through debug1 — Detailed debugging information useful to developers.
Use the Client min messages list to specify the lowest severity level of message sent to the client application.
Use the Log min messages list to specify the lowest severity level to write to the server log.
By default, when an error message is written to the server log, the text of the SQL statement that initiated the log entry isn't included. Use the Log min error statement list to specify a severity level that triggers SQL statement logging. If a message is of the specified severity or higher, the SQL statement that produced the message is written to the server log.
Use the Log min duration statement list to specify a statement duration, in milliseconds. Any statements that exceed the specified number of milliseconds are written to the server log. A value of -1 disables all duration-based logging. A value of 0 logs all statements and their duration.
Use the Log temp files field to specify a file size, in kilobytes. When a temporary file reaches the specified size, it's logged. Specify a value of -1 (the default) to disable this functionality.
Use the Log autoVacuum min duration field to specify a time length, in milliseconds. If auto-vacuuming exceeds the length of time specified, the activity is logged. Specify a value of -1 (the default) to disable this functionality.
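The When to Log fields map to these configuration parameters. A sketch showing the default values described above:

```ini
# When to Log settings expressed as configuration parameters
client_min_messages = notice         # Client min messages (notice is the default)
log_min_messages = warning           # Log min messages: lowest level written to the log
log_min_error_statement = error      # Log min error statement: severity that triggers SQL logging
log_min_duration_statement = -1      # -1 disables duration-based logging; 0 logs all statements
log_temp_files = -1                  # -1 (default) disables; otherwise a size in kilobytes
log_autovacuum_min_duration = -1     # -1 (default) disables; otherwise a time in milliseconds
```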
Select Next.
Use the What to Log dialog box to specify log entry options that are useful for debugging and auditing.
Use the switches in the Debug options box to include information in the log files related to query execution that might be of interest to a developer:
- Set the Parse tree switch to Yes to include the parse tree in the log file.
- Set the Rewriter output switch to Yes to include query rewriter output in the log file.
- Set the Execution plan switch to Yes to include the execution plan for each executed query in the log file.
When the Indent Debug Options Output in Log switch is set to Yes, the server indents each line that contains a parse tree entry, a query rewriter entry, or query execution plan entry. While indentation makes the resulting log file more readable, it results in a longer log file.
Use the switches in the General Options box to include auditing information in the log file:
- Set the Checkpoints switch to Yes to include checkpoints and restartpoints in the server log.
- Set the Connections switch to Yes to include each attempted connection to the server as well as successfully authenticated connections in the server log.
- Set the Disconnections switch to Yes to include a server log entry for each terminated session that provides the session information and session duration.
- Set the Duration switch to Yes to include the amount of time required to execute each logged statement in the server log.
- Set the Hostname switch to Yes to include both the IP address and host name in each server log entry. By default, only the IP address is logged. This might cause a performance penalty.
- Set the Lock Waits switch to Yes to write a log entry for any session that waits longer than the time specified in the `deadlock_timeout` parameter to acquire a lock. This is useful when trying to determine if lock waits are the cause of poor performance.
Use the Error verbosity list to specify the detail written to each entry in the server log:
- Select default to include the error message, DETAIL, HINT, QUERY, and CONTEXT in each server log entry.
- Select terse to log only the error message.
- Select verbose to include the error message, the DETAIL, HINT, QUERY, and CONTEXT error information, SQLSTATE error code and source code file name, the function name, and the line number that generated the error.
Use the Prefix string field to specify a printf-style string that is written at the beginning of each log file entry. For information about the options supported, see the log_line_prefix documentation in the Postgres core documentation.
Use the Statements list to specify the SQL statements to include in the server log:
- Specify none (the default) to disable logging of SQL statements.
- Specify ddl to log DDL (data definition language) statements, such as CREATE, ALTER, and DROP.
- Specify mod to log all DDL statements as well as all DML (data manipulation language) statements, such as INSERT, UPDATE, DELETE, TRUNCATE, and COPY FROM.
- Specify all to log all SQL statements.
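The What to Log switches and lists correspond to these parameters. A sketch of a typical auditing-oriented configuration (the values, including the prefix string, are chosen for illustration and aren't defaults):

```ini
# What to Log settings expressed as configuration parameters
debug_print_parse = off         # Parse tree
debug_print_rewritten = off     # Rewriter output
debug_print_plan = off          # Execution plan
debug_pretty_print = on         # Indent Debug Options Output in Log
log_checkpoints = on            # Checkpoints
log_connections = on            # Connections
log_disconnections = on         # Disconnections
log_duration = off              # Duration
log_hostname = off              # Hostname (on logs host name as well as IP address)
log_lock_waits = on             # Lock Waits
log_error_verbosity = default   # Error verbosity: terse | default | verbose
log_line_prefix = '%t [%p]: '   # Prefix string; see the log_line_prefix documentation
log_statement = 'ddl'           # Statements: none | ddl | mod | all
```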
Select Next.
Use the Schedule Logging Changes dialog box to specify when to apply the logging configuration changes:
- Set the Configure logging now switch to Yes to enable your configuration preferences. The server restarts when you complete the Log Manager wizard.
- Set Configure logging now to No to use the Schedule it for some other time calendar selector to specify a convenient time to apply logging configuration preferences and for the server to restart.
When you apply the configuration changes specified by the Log Manager wizard, the server restarts, temporarily interrupting use of the database server for users.
Select Finish to exit the wizard. Either restart the server or schedule the server restart for the time specified on the scheduling dialog box.
Reviewing the Server Log Analysis dashboard
After invoking the Log Manager wizard and importing your log files to PEM, you can use the Server Log Analysis dashboard to review the log files for a selected server. To open the Server Log Analysis dashboard, right-click the name of a monitored server in the PEM client tree and select Dashboards > Server Log Analysis.
The header information on the Server Log Analysis dashboard displays the date and time that the server was started, the date and time that the page was last updated, and the current number of triggered alerts.
Entries in the Server Log table appear in chronological order, with the most recent log entries first. Use the scroll bars to navigate through the log entries or to view columns that are off the display.
Headings at the top of the server log table identify the information stored in each column. Hover over a column heading to view a tooltip that contains a description of the content of each column.
You can use filtering to limit the number of server log records that are displayed. Select Show Filters to open the filters panel and define a filter. Use the filter definition box to describe the selection criteria for selecting a subset of a report to display:
- Use the From field to specify a starting date for the displayed server log.
- Use the To field to specify an ending date for the displayed server log.
- Enter a role name in the Username field to display transactions performed only by that user.
- Enter a database name in the Database field to limit the displayed records to transactions that were performed against the specified database.
- Use the Command Type field to specify selection criteria for the commands that appear in the filtered report.
After you specify the criteria for filtering the server logs, select Filter to display the filtered server log in the Server Log table.
Postgres Log Analysis Expert
The PEM Log Analysis Expert analyzes the log files of servers that are registered with Postgres Enterprise Manager and produces an analysis of your Postgres cluster's usage based on log file entries. You can use information on the Log Analysis Expert reports to make decisions about optimizing your cluster usage and configuration to improve performance.
Before using the PEM Log Analysis Expert, you must specify a Service ID value on the Advanced tab of the Server Properties dialog box. You must also use the Log Manager wizard to enable log collection by the PEM server.
To open the Postgres Log Analysis Expert wizard, in the PEM client, select Management > Postgres Log Analysis Expert. In the wizard's Welcome screen, select Next.
The wizard's Analyzer selection dialog box displays a list of analyzers from which you can select. Each analyzer generates a corresponding table, chart, or graph that contains information collected from the log files.
Select the check box next to an analyzer to specify for the Log Analysis Expert to prepare the corresponding table, chart, or graph. After making your selections, select Next.
Use the Server selection tree to specify the servers you want the Postgres Log Analysis Expert to analyze. If you select multiple servers, the resulting report contains the corresponding result set for each server in a separate but continuous list. Select Next to continue to the Report options dialog box.
Use the Options section to specify the analysis method and the maximum length of any resulting tables:
- Use the Aggregate method list to select the method used by the Log Analysis Expert to consolidate data for the selected time span:
- SUM calculates a value that is the sum of the collected values for the specified time span.
- AVG calculates a value that is the average of the collected values for the specified time span.
- MAX uses the maximum value that occurs in a specified time span.
- MIN uses the minimum value that occurs in a specified time span.
- Use the Time span field to specify the number of minutes that the analyzer incorporates into each calculation for a point on a graph. For example, if the time span is five minutes, and the aggregate method is AVG, each point on the given graph contains the average value of the activity that occurred in a five-minute time span.
- Use the Rows limit field to specify the maximum number of rows to include in a table.
Use the Time Intervals section to specify the time range for the Log Analysis Expert to analyze:
- Set Relative days to Yes to enable the (+/-) From date field and specify the number of days before or after the date and time selected in the From field.
- Use the From field to specify the starting date and time for the analysis.
- Use the To field to specify the ending date and time for the analysis.
- Use the (+/-) From date selector to specify the number of days before or after the From date to include in the analysis.
After you specify the report options, select Next to continue to the Report destination dialog box.
You can select the default option and select Finish to view the Log Analysis Expert report in the PEM client's tabbed browser. Or select Download the report to save a copy of the report to an HTML file for later use.
Reviewing the Postgres Log Analysis Expert report
If you choose to review the report immediately, the Postgres Log Analysis Expert report is displayed in the PEM Client window. The report header displays:
- The date and time that the report was generated
- The time period that the report spans
- The aggregation method specified when defining the report
The name of the server for which information is displayed appears at the start of each section of the report.
The report displays the tables, graphs, and charts selected in the Log Analysis Expert wizard. Select Jump To, located in the lower-right corner of the screen, to navigate to a specific graphic.
If the report contains an analysis of more than one monitored server, charts and tables are displayed in sets. First the graphs, tables, and charts that display statistics for one server appear. Then the graphics for the next server in the report appear.
Summary Statistics table
The Summary Statistics table displays a summary of server activity for the selected server.
Row name | Description |
---|---|
Number of unique queries | Count of unique queries made against the selected server in the specified time period. |
Total queries | Count of queries made against the selected server in the specified time period. |
Total queries duration | Amount of time used to execute queries against the server. |
First query | Time, within the specified time period, that the first query executed against the server. |
Last query | Time, within the specified time period, that the last query executed against the server. |
Queries peak time | Point in time, within the specified time period, that query activity reached its highest level. |
Number of events | Count of log events within the specified time period. |
Number of unique events | Count of unique server events. |
Total number of sessions | Count of the number of sessions recorded in the time period. |
Total duration of sessions | Amount of time that sessions were connected during the specified time period. |
Average sessions duration | Average length of each session. |
Total number of connections | Number of user connections made to the server. |
Total number of databases | Number of databases on the selected server. |
Hourly Statistics table
The Hourly DML Statistics table displays the statistics related to the use of various DML commands (`SELECT`, `INSERT`, `UPDATE`, `DELETE`, `COPY`, and `FETCH`) within a one-hour period. To generate values in the Min Duration(sec), Max Duration(sec), and Avg Duration(sec) columns of this table, you must specify a value greater than or equal to 0 in the `log_min_duration_statement` configuration parameter. You can set the parameter either by modifying the `postgresql.conf` file with your editor of choice or by specifying a value of `0` or greater in the Log Min Duration Statement field of the Log Manager wizard.
Column name | Description |
---|---|
Time | Start of the one-hour period for which data was analyzed. |
Database | Name of the database in which the specified DML command executed. |
Command Type | DML command type. |
Total Count | Number of times that a command of the specified command type executed during the one-hour period analyzed by the report. |
Min Duration(sec) | Shortest amount of time used by the server to respond to the specified command type, in seconds. |
Max Duration(sec) | Longest amount of time used by the server to respond to the specified command type, in seconds. |
Avg Duration(sec) | Average length of time used by the server when responding to the specified command type, in seconds. |
DML Statistics Timeline section
The DML Statistics Timeline section of the Log Analysis Expert report displays information about DML statement usage.
Graphic | Description |
---|---|
Line graph | Analysis of statement usage during the selected time period. Hover over a specific point to view detailed information about that point on the graph. |
Pie chart | Percent of statement usage of each respective DML statement type during the selected time period. |
DDL Statistics Timeline section
The DDL Statistics Timeline section of the Log Analysis Expert report displays information about DDL statement usage.
Graphic | Description |
---|---|
Line graph | Analysis of statement usage during the selected time period. Hover over a specific point to view detailed information about that point on the graph. |
Pie chart | Percent of statement use of each respective DDL statement type during the selected time period. |
Commit and Rollback Statistics Timeline section
The Commit and Rollback Statistics Timeline section of the Log Analysis Expert report displays information about the `COMMIT`, `ROLLBACK`, and `SAVEPOINT` statements logged during the specified time period.
Graphic | Description |
---|---|
Line graph | Analysis of the commit and rollback activity during the specified time period. Hover over a specific point to view detailed information about that point on the graph. |
Pie chart | Comparative percent of COMMIT , SAVEPOINT , or ROLLBACK statements executed during the specified time period. |
Checkpoint Statistics Timeline section
The Checkpoint Statistics Timeline section of the Log Analysis Expert report displays information about the checkpoint operations logged during the specified time period.
Graphic | Description |
---|---|
Line graph | Analysis of the checkpoint operation activity during the specified time period. Hover over a specific point to view detailed information about that point on the graph. |
Pie chart | Comparative percent of different types of checkpoint activity logged during the specified time period. |
Log Event Statistics table
The Log Event Statistics table lists log entries with a severity level of WARNING, ERROR, FATAL, PANIC, HINT, or CONTEXT. The level of logging detail for error messages is controlled by the log_min_error_statement
parameter. You can set the parameter either by modifying the postgresql.conf
file with your editor of choice or by specifying a value in the Log Min Error Statement field of the Log Manager wizard.
Column name | Description |
---|---|
Error Severity | Severity level of the log entry. |
Message | The log message. |
Total Count | Number of times that the log entry occurred. |
Log Statistics table
The Log Statistics table lists log entries that indicate an operational severity level of LOG, DETAIL, DEBUG, NOTICE, INFO, or STATEMENT. The level of logging detail for informational messages is controlled by the log_min_messages
parameter. You can set the parameter either by modifying the postgresql.conf
file with your editor of choice or by specifying a value in the Log Min Messages field of the Log Manager wizard.
Column name | Description |
---|---|
Error Severity | Severity level of the log entry. |
Total Count | Number of times that the log entry occurred. |
Temp Generated Queries table
The Temp Generated Queries table displays a list of queries that created temporary files.
Column name | Description |
---|---|
Log Time | The time that the log entry was generated. |
TempFile Size(Bytes) | The size of the temporary file in bytes. |
Query | The text of the query that created the temporary file. |
Temp File Statistics Timeline graph
The Temp File Statistics Timeline graph displays the size of temporary files over the specified time period. Hover over a specific point to view detailed information about that point on the graph.
Lock Statistics Timeline section
The Lock Statistics Timeline section of the Log Analysis Expert report displays information about the locks held during the specified time period:
Graphic | Description |
---|---|
Graph | Number of locks held at any given point during the time period. Hover over a specific point to view detailed information about that point on the graph. |
Pie chart | Displays the relative percentage of each type of lock used during the selected time period. |
Waiting Statistics Timeline section
The Waiting Statistics Timeline section of the Log Analysis Expert report displays information about DML statements that are waiting for a lock during the specified time period.
Graphic | Description |
---|---|
Graph | Number of DML statements that are waiting at any given point during the time period. Each colored line represents a statement type. Hover over a specific point to view detailed information about that point on the graph. |
Pie chart | Relative percentage of each type of DML statement that waited for a lock during the selected time period. |
Idle Statistics Timeline section
The Idle Statistics Timeline section of the Log Analysis Expert report displays information about the amount of time that a connection to the server is idle. A connection that is `IDLE` is waiting for a command from the client. A connection that is `IDLE in transaction` has started a transaction but hasn't yet committed or rolled back the transaction and is waiting for a command from the client. A session that is `IDLE in transaction (aborted)` also started a transaction that hasn't yet been committed or rolled back, but an error occurred in the transaction, so the transaction can only be rolled back.
Graphic | Description |
---|---|
Graph | Times at which the server is IDLE , IDLE in transaction , and IDLE in transaction (aborted) . Hover over a specific point to view detailed information about that point on the graph. |
Pie chart | Relative percentage of time spent in each idle state during the selected time period. |
Autovacuum Statistics table
The Autovacuum Statistics table displays statistics about autovacuum activity on monitored servers.
Column name | Description |
---|---|
Log Time | Time that the autovacuum activity was written to the log. |
Relation | Name of the table on which the autovacuum was performed. |
Index Details | Number of index scans that were performed. |
Page Details | Number of pages that were removed and number of pages that remain. |
Tuple Details | Number of tuples that were removed and number of tuples that remain. |
Buffer Usage | Number of buffers hit, missed, or dirty. |
Read Rate | Average read rate, in MB per second. |
System Usage | Percent of CPU time used performing autovacuum activities. |
Autoanalyze Statistics table
The Autoanalyze Statistics table displays logged autoanalyze activity.
Column name | Description |
---|---|
Log Time | Time that the autoanalyze activity was written to the log. |
Relation | Name of the table on which the autoanalyze was performed. |
System Usage | Percent of CPU time used performing autoanalyze activities. |
Slow Query Statistics table
The Slow Query Statistics table displays the slowest queries executed on monitored servers. The table includes the number of entries specified in the Rows Limit field of the Log Analysis Expert.
Column name | Description |
---|---|
Log Time | Time that the query activity was written to the log. |
Tag | Command type. |
Query | Text of the performed query. |
Parameters | Parameters if the query is a parameterized query. |
Duration | Length of time that it took the server to execute the query. |
Host | Name of the host on which the query executed. |
Database | Name of the database on which the query executed. |
Frequently Executed Query Statistics table
The Frequently Executed Query Statistics table displays the most frequently executed query statements. The table includes the number of entries specified in the Rows Limit field of the Log Analysis Expert.
Column name | Description |
---|---|
Query | Text of the performed query. |
Parameters | Parameters if the query is a parameterized query. |
No. of Times Executed | Number of times that the query executed. |
Total Duration | Length of time that it took the server to execute the query. |
Most Time Executed Query Statistics table
The Most Time Executed Query Statistics table displays the queries that took the most execution time on the server. The table includes the number of entries specified in the Rows Limit field of the Log Analysis Expert.
Column name | Description |
---|---|
Query | Text of the performed query. |
Parameters | Parameters if the query is a parameterized query. |
No. of Times Executed | Number of times that the query executed. |
Total Duration | Length of time that it took the server to execute the query. |
Connections Overview Timeline section
The Connections Overview Timeline section of the Log Analysis Expert report displays information about successful and unsuccessful connection attempts during the specified time period.
Graphic | Description |
---|---|
Timestamp graph | Number of server connections attempted and connections authenticated at any given point during the specified time period. Hover over a specific point to view detailed information about that point on the graph. |
Summary pie chart | The relative percentage of connections attempted and connections authenticated during the specified time period. |