Slow performance fix: Oracle alert log v$diag_alert_ext / x$diag_alert_ext

v$diag_alert_ext, aka x$diag_alert_ext, should be useful for querying the Oracle alert log. Unfortunately, with common conditions in the WHERE clause, queries against it run unusably slowly.

This is covered in MOS Doc 1684140.1 “Selects from v$diag_alert_ext run slowly with large alert logs”, but that ends with the unhelpful “Solution: There is no solution.” Bug 18643828 is an (unactioned) enhancement request from 2014 to add indexes to x$diag_alert_ext.
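
To illustrate the problem, a query of the following shape is the kind that hits it (illustrative only; the exact predicates in a real monitoring script will vary):

 -- Illustrative slow query: the simple timestamp filter alone tends to be
 -- fine, but adding further predicates like the LIKE below makes it crawl
 -- against a large alert log.
 SELECT d.originating_timestamp
      , d.message_text
   FROM v$diag_alert_ext d
  WHERE d.originating_timestamp > SYSTIMESTAMP - 1
    AND d.message_text LIKE '%ORA-%'
  ORDER BY d.originating_timestamp DESC
 ;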

But actually there is a solution already – the script below runs fast on all databases I’ve tested it on so far.

-- 1) Initial setup, one-off.
 CREATE GLOBAL TEMPORARY TABLE gtt_diag_alert_ext (
      host_id VARCHAR2(67)
    , originating_timestamp TIMESTAMP(9) WITH TIME ZONE
    , message_type VARCHAR2(14)
    , message_level VARCHAR2(9)
    , message_text VARCHAR2(2051)
 ) ON COMMIT DELETE ROWS
 ;
 -- 2) Query script
 TRUNCATE TABLE gtt_diag_alert_ext ; -- only needed if not connecting as a new session.
 INSERT INTO gtt_diag_alert_ext ( host_id , originating_timestamp , message_type , message_level , message_text )
 SELECT d.host_id
      , d.originating_timestamp
      , CASE d.message_type
           WHEN 1 THEN 'Unknown'
           WHEN 2 THEN 'Incident Error'
           WHEN 3 THEN 'Error'
           WHEN 4 THEN 'Warning'
           WHEN 5 THEN 'Notification'
           WHEN 6 THEN 'Trace'
           END AS message_type
      , CASE d.message_level
          WHEN 1 THEN 'Critical'
          WHEN 2 THEN 'Severe'
          WHEN 8 THEN 'Important'
          WHEN 16 THEN 'Normal'
          END AS message_level
      , d.message_text
   FROM v$diag_alert_ext d
  WHERE d.originating_timestamp > SYSTIMESTAMP - 1
 ;
 SELECT t.originating_timestamp
      , t.host_id
      , t.message_type
      , t.message_level
      , t.message_text
   FROM gtt_diag_alert_ext t
  WHERE t.message_text LIKE '%ORA-%'
  ORDER BY t.originating_timestamp DESC
 ;

A real table could be used instead of a global temporary table, especially if you want to persist the results or centralize them into a single remote database. From 18.1, a private temporary table could be used instead (a sketch follows below). A PL/SQL array or even a simple PL/SQL loop would also work. I found that the ORDERED_PREDICATES and WITH clause MATERIALIZE hints did not fix performance in my testing.
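
As a minimal sketch of the 18.1 private temporary table variant (assuming the default PRIVATE_TEMP_TABLE_PREFIX of ORA$PTT_; the column selection here is illustrative):

 -- Sketch: stage one day of the alert log in a private temporary table.
 -- The ora$ptt_ prefix is required by the default PRIVATE_TEMP_TABLE_PREFIX.
 -- No one-off setup needed; the table definition drops itself at commit.
 CREATE PRIVATE TEMPORARY TABLE ora$ptt_diag_alert_ext
    ON COMMIT DROP DEFINITION
    AS
 SELECT d.host_id
      , d.originating_timestamp
      , d.message_text
   FROM v$diag_alert_ext d
  WHERE d.originating_timestamp > SYSTIMESTAMP - 1
 ;
 SELECT t.originating_timestamp
      , t.message_text
   FROM ora$ptt_diag_alert_ext t
  WHERE t.message_text LIKE '%ORA-%'
  ORDER BY t.originating_timestamp DESC
 ;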

Other columns in v$diag_alert_ext might also be useful to output, such as component_id and module_id.
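
For example (a sketch; check DESC v$diag_alert_ext on your version for the full column list, and add matching columns to the GTT if you want to stage these too):

 -- Sketch: pull component_id and module_id alongside the message text.
 SELECT d.originating_timestamp
      , d.component_id
      , d.module_id
      , d.message_text
   FROM v$diag_alert_ext d
  WHERE d.originating_timestamp > SYSTIMESTAMP - 1
 ;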

Be aware that v$diag_alert_ext is undocumented and hence unsupported. It relies on the Automatic Diagnostic Repository (ADR) working correctly, which might not always be the case, especially in upgraded databases with non-standard directory locations or symbolic links – so it is worth checking there is data in v$diag_alert_ext before relying on this, for example with the quick check below. Parsing the alert log file at the OS level is likely a safer option, using old-style cron scripts or newer Filebeat/Logstash for visualizations. Obtaining the data from the central OEM repository tables is another alternative.
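
A quick sanity check along these lines confirms ADR is actually feeding the view before you build monitoring on it:

 -- Expect a non-zero count and a recent timestamp on a healthy database;
 -- an empty result suggests ADR is not populating the view here.
 SELECT COUNT(*) AS recent_rows
      , MAX(originating_timestamp) AS latest_entry
   FROM v$diag_alert_ext
  WHERE originating_timestamp > SYSTIMESTAMP - 1
 ;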

August 23, 2018
Posted in Scripts

  • RobK says:

    27 August 2018 at 15:09

    Hello!

    I had a similar problem with our monitoring using v$diag_alert_ext.
    Reading your article and testing your advice, I realized that a select on v$diag_alert_ext is fast when you have a simple filter like originating_timestamp > SYSTIMESTAMP - 1.

    It becomes slow when adding more complex criteria such as originating_timestamp > systimestamp-301/(24*60*60)
    or message_text not like '%(ORA-3136)%'.

    So the easiest solution is to use an inline view and make sure no merging or predicate pushing is taking place.

    With the WITH clause and a hint, a real-life monitoring query looks like this:

    with oneday as (select /*+ materialize */ * from v$diag_alert_ext where ORIGINATING_TIMESTAMP > systimestamp-1)
    select to_char(ORIGINATING_TIMESTAMP,'YYYY-MM-DD HH24:MI:SS') || ' ' || message_text from oneday
    where ORIGINATING_TIMESTAMP > systimestamp-301/(24*60*60) and message_text like '%ORA-%' and message_text not like '%result of ORA-609%' and message_text not like '%result of ORA-28%' and message_text not like '%(ORA-3136)%' and message_text not like '%ORA-01013:%';

    This is fast, and no object needs to be created.
    I think it is an improved version of your original idea.

    Thanks for the hint. I’ve been struggling with this for quite some time.
    Cheers,
    Robert
