Join us at the MP University next Wednesday

Join OpsLogix at the MP University with Silect, Microsoft & more

Silect, along with OpsLogix, Microsoft, and other industry-leading partners, is proud to present MP University. Join us for this free one-day online session to learn about SCOM, Management Packs, Azure, and much more.

The event will be held on November 21 from 9 AM to 4 PM, Central European Time (CET).

If you are unable to attend on Nov. 21, or if the time is inconvenient, a live rebroadcast will be held on Wednesday, Nov. 28, from 9 AM to 4 PM CET. Register for MP University and you will be notified of the rebroadcast.

OpsLogix session: Cookdown your Management Pack in SCOM

This session will show not only why using cookdown in your Management Packs can dramatically reduce resource usage in SCOM, but also how to implement cookdown in your own Management Packs. Cookdown lets multiple workflows share a single data source instance, so a script that would otherwise run once per monitored object runs only once per interval. An example will also be given of how cookdown is implemented in the OpsLogix VMware Management Pack.
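
As a rough illustration of the idea (not the session's actual example): a cookdown-friendly PowerShell data source returns one property bag per instance from a single script execution, so SCOM can run the script once and feed many workflows. Get-ExampleVm below is a hypothetical stand-in for a real collection cmdlet:

    # Sketch of a cookdown-friendly SCOM data source script.
    # One execution emits a property bag per VM, so every rule or monitor
    # targeting an individual VM can share ("cook down" to) this single run.
    $api = New-Object -ComObject 'MOM.ScriptAPI'
    foreach ($vm in Get-ExampleVm) {            # hypothetical data-gathering cmdlet
        $bag = $api.CreatePropertyBag()
        $bag.AddValue('VMName', $vm.Name)
        $bag.AddValue('CpuUsage', $vm.CpuUsage)
        $bag                                    # emit each bag; per-instance filtering happens in the MP
    }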

Click here to reserve your spot!

OpsLogix is sponsoring SCOM-Day 2018

When?

Wednesday, October 10, 2018, at Visual Arena, Lindholmen, Gothenburg

This year’s theme

Hybrid Monitoring

As more and more organizations buy cloud services, we have focused this year’s event on how to monitor hybrid environments with products such as SCOM and Azure. We have therefore invited Thomas Maurer and Marcel Zehner, two of Microsoft’s Most Valuable Professionals (MVPs), as well as Martin Ehrnst of Intility, representing a major hosting provider. All three will speak at the event and will be available to share their experiences with you.

If you have not been able to participate in previous years’ SCOM-Day events, now is your opportunity to join! It is the ideal forum for anyone working in IT operations, at any level. The idea is to network and pick up tips from colleagues in the industry, and to work together towards more modern, efficient, and innovative IT operations. In previous years, organizations from all sorts of industries have participated, and we hope that you will join us this year as well.

The event is free of charge and the day is packed with valuable tips from well-known speakers in the area, along with the latest and hottest news from vendors.

Based on feedback from participants in previous years, we have chosen to cut the number of sponsors in half for the 2018 event and to invite external speakers. We constantly strive to improve the quality of the event to provide you with the best possible experience.

Agenda

0830-0900 Registration
0900-0905 Welcome
0905-0945 Thomas Maurer (MVP)
What is Azure Stack, how does it work and how do you monitor it?
0945-1015 Coffee
1015-1045 OpsLogix
The future of VMware monitoring (on-prem, hybrid and cloud)
1045-1100 Leg stretcher
1100-1130 HYCU
More info will be available soon.
1130-1200 NiCE
Monitoring Office 365 based on your perspective.
1200-1300 Lunch & Quiz
1300-1330 Silect
Silect Portal for SCOM – Silect will demonstrate a new SCOM web portal that allows users to interactively view and share information about what is being monitored in their environment.
1330-1400 Approved
News in Operations Manager 1807 and Azure Management.
1400-1430 Coffee & Fika
1430-1515 Martin Ehrnst (Intility)
A hosting provider’s experience of monitoring a hybrid environment with SCOM and Azure.
1515-1530 Leg stretcher
1530-1615 Marcel Zehner (MVP)
Monitor and manage a Tesla with PowerShell, SCOM, OMS and PowerBI.
1615-1645 Closing

The event is primarily aimed at end users, and to ensure its quality we reserve the right to refuse access to persons outside the target group. The number of seats is limited and interest is high, so a maximum of 3 attendees per organization applies.

Sponsors

Would you like to attend SCOM-Day 2018? You can register here!

Why Are Less Than 1% Of Critical Alerts Investigated?

Many organizations seem to be suffering from alert fatigue. According to Infosecurity, a recent EMA report found that 80% of organizations that receive 500 or more severe/critical alerts per day investigate less than 1% of them. A shocking number, to say the least! But what obstacles are organizations facing that allow such neglect?

From the EMA report, we can conclude that organizations face four major issues when it comes to their ability to tackle these severe/critical alerts.

Issues Organizations Face

Alert Volume

Recent surveys from the EMA report indicate that 92% of organizations receive up to 500 alerts a day. Of all the organizations that took part in the survey, 88% said they receive up to 500 “critical” or “severe” alerts per day. Yet 93% of those respondents rate their endpoint prevention program as “competent”, “strong”, or even “very strong”. So either there is a big gap between perception and reality, or alerts considered “severe” or “critical” should not be categorized as such. Either way, alert management does not appear to be working as intended.

Capacity

Even when organizations have detection systems in place that generate massive alert volumes, what they often lack are the human resources to manage those alerts. Organizations are clearly dealing with a large capacity gap: of the surveyed organizations that receive 500 to 900 severe/critical alerts per day, 60% have only 3-5 FTEs working on the alerts.

On top of that, 67% of those surveyed indicated that only 10 or fewer severe/critical alerts are investigated per day, and 87% of the participants said that their teams have the capacity to investigate only 25 or fewer severe/critical events per day. For most of the participants the alert volumes are high, yet the resources at their disposal are critically low. As a result, less than 1% of the incidents end up being investigated.

Priority

The research assumes a need for prioritization and classification into severe/critical buckets, which is understandable given the traditional, manual approach to Incident Response.

However, once you look past that prioritization, the numbers are even worse and new questions arise. If less than 1% of severe/critical alerts are ever investigated, what percentage of all alerts is investigated? What percentage of alerts is incorrectly categorized, and how many alerts are classified as benign and ignored completely, yet warrant follow-up?

In truth, any prioritization is a compromise, and the act of classifying by priority is merely a justification to ignore alerts.

Incident Response

The three prior problems point to a substandard, broken incident response process: there are too many alerts to investigate, not nearly enough people to follow up, and a persistent requirement to classify every alert, all just to be able to act on less than 1% of the total number of alerts. Nevertheless, 92% of respondents indicated that their Incident Response programs for endpoint incidents were “competent” or better.

The only way this makes sense is if respondents felt that, when their Incident Response teams were finally able to take action on the small percentage of alerts that reached that point, they were successful in addressing the issue.

Conclusions

  • Detailed analysis showed that, in aggregate, 80% of the organizations were only able to investigate 11 to 25 events per day, leaving them a huge, and frankly insurmountable, daily gap.
  • This gap stems from a lack of high-fidelity security information, whether caused by a lack of tools to collect data or by a lack of tools able to analyze it.
  • Information itself isn’t the problem. This and similar surveys show the depth and breadth of the challenge facing cybersecurity teams today, but simply gathering more information to hand off to analysts isn’t the answer.

The Solution

Automation is a key aspect of creating an effective and mature security program. It improves productivity and, given the lack of staff and the abundance of incidents in most organizations, automation should be a priority in the evolution of prevention and detection.

“Automation is the answer!”

When asked how important automating tasks such as data capture and analysis is to prevention, detection, and response for both network and endpoint security programs, 85% of respondents said it was either important or very important.

Thus, the only viable response to the growing volume of alerts and the scarcity of capacity is to use security orchestration and automation tools to:

  • Automatically investigate every alert. Rather than prioritizing alerts to match capacity, use a solution that investigates every single alert.
  • Gather additional context from other systems. Automate the collection of contextual information from other network detection systems, logs, and similar sources.
  • Exonerate or incriminate threats. Using both known threat information and inspection, decide whether what was detected is benign or malicious.
  • Automate the remediation process. Once a verdict has been reached, remediate automatically (quarantine a file, kill a process, shut down a C&C connection, and so on).
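
To make these four steps concrete, here is a minimal PowerShell sketch of the loop such an orchestration tool automates. Every helper function here (Get-NewAlert, Get-AlertContext, Test-Malicious, Invoke-Remediation, Close-Alert) is a hypothetical placeholder, not a real product’s API:

    # Hypothetical orchestration loop: investigate every alert, enrich it,
    # reach a verdict, and remediate automatically. All helper functions
    # are placeholders for a real platform's integrations.
    while ($true) {
        foreach ($alert in Get-NewAlert) {                        # 1. every alert, no triage queue
            $context = Get-AlertContext -Alert $alert             # 2. context from logs, sensors, etc.
            if (Test-Malicious -Alert $alert -Context $context) { # 3. incriminate or exonerate
                Invoke-Remediation -Alert $alert                  # 4. quarantine, kill process, block C&C
            }
            else {
                Close-Alert -Alert $alert -Resolution 'Benign'
            }
        }
        Start-Sleep -Seconds 60
    }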

While we are admittedly biased, we believe this approach is the only way forward.

Hexadite, the only agentless intelligent security orchestration and automation platform for Global 2000 companies, also states that automation is the only real answer: “it is impossible for organizations to hire enough people to create an adequate context for the data – and thus provide high fidelity security information.”

References

  • “Less Than 1% of Severe/Critical Security Alerts Are Ever Investigated,” Tara Seals, InfoSecurityMagazine.com. Retrieved April 8, 2018.
  • “White Paper: EMA Report Summary: Achieving High-Fidelity Security,” EMA Research. Retrieved April 8, 2018.

Free Operations Manager Performance Monitoring Management Pack

A guest blog by Jonas Lenntun from Approved Sweden.

Prior to an upcoming upgrade or a new installation of System Center Operations Manager (SCOM), it’s important to keep statistics on how the performance of your environment changes over time. You may also want to make changes to the infrastructure and understand whether they have a positive or negative impact.

For this purpose, Approved has developed a free Management Pack for SCOM 2012 R2 and SCOM 2016 that simulates a number of user calls using PowerShell commands, since these communicate with SCOM through the SDK in much the same way as the console does.

PowerShell may not be a perfect stand-in for the console, but it at least gives us an anchor point for whether performance improves or deteriorates over time.

This also gives you, as a platform administrator, an opportunity to compare your values with other environments, to get an idea of whether your environment is working well or suffering from performance issues.

Operations Manager Performance Monitoring

The Management Pack consists of four different types of rules that run at different intervals, targeting two different classes for data collection. All rules are disabled on import, so the rules you want must be enabled afterwards.

Internally at Approved, we use the Management Pack in all our projects to compare current performance with previous versions of SCOM, but we primarily use it to assure the quality of the projects we work on as we continually add more agents and management packs.

Views

In the Management Pack, you’ll find two different views under the “Operations Manager Performance” folder after the Management Pack has been imported into SCOM.

  • Events
  • SDK Performance

Events

The rule that collects Event ID 21025 from the Operations Manager event log is used to identify abnormal changes in the environment that force SCOM to process new configuration files. The rule is called “OpsMgr Connector Event New Config Rule” and targets “Root Management Server Emulator”.
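
If you want to spot-check the same signal manually, a quick query on a management server (a sketch, assuming the default “Operations Manager” event log) could look like this:

    # List the most recent "new configuration" events (ID 21025) on a management server.
    Get-WinEvent -FilterHashtable @{ LogName = 'Operations Manager'; Id = 21025 } -MaxEvents 20 |
        Select-Object TimeCreated, Message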

In older environments, a phenomenon called “config churn” could be encountered, and it was a major problem when a single “Root Management Server” handled these kinds of calculations. In later versions of SCOM, this load can be divided across several servers, but you still do not want to change your environment too much, especially during the daytime when operators may be working in the console.

More information about this can be found here.

We analyze the trends of this event with the help of our IT Service Analytics tool. This gives us a whole new dimension for working with this kind of data and helps us identify patterns, such as time of day or day of the week.

SDK Performance

To get a good picture of how performance is affected over time, we have three different rules available.

  • Measure SDK Client Connections (total) – collects the number of console connections on each management server. It differs from the built-in rule in that it counts the total number of connected processes in one single rule. Targets “Management Server” and runs every 15 minutes by default.
  • Count Objects Returned by Get- (Command) – collects the number of items returned on each execution. For example, Get-SCOMAgent returns the number of agents installed in the environment, and the result is stored in the Operations Manager database. Targets “Root Management Server” and runs once a day by default.
  • Measure Get- (Command) Execution Time (s) – measures the time it takes for the script to run and returns the value in seconds. Targets “Management Server” and runs every 15 minutes by default.

The above values can be compared retrospectively to see whether changes made in the environment yield a percentage improvement or deterioration. You can also see whether an issue is caused by too many console users connecting to a specific server or by too many changes occurring in the environment.

The PowerShell commands currently in use are listed below (a rough illustration of a single measurement appears after the list):
  • Get-SCOMAgent
  • Get-SCOMClass
  • Get-SCOMGroup
  • Get-SCOMManagementPack
  • Get-SCOMMonitor
  • Get-SCOMMonitoringObject
  • Get-SCOMOverride
  • Get-SCOMRule
  • Get-SCOMDiscovery
  • Get-SCOMEvent

Please be careful before activating the rules, and make sure they do not affect your environment negatively. If your environment is not well tuned and, for example, collects a lot of events, the Get-SCOMEvent rule can result in long response times that may adversely affect performance. All scripts have a 120-minute timeout, which can show up in the error logs when changes in the environment prevent a script from completing. This is unfortunately quite normal and is simply how SCOM works.
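
As a rough illustration of what a single measurement involves (this is not the Management Pack’s actual script), one of the listed Get- commands can be timed and its results counted on a management server:

    # Time one SDK call and count the objects it returns.
    Import-Module OperationsManager              # run on a management server
    $sw = [System.Diagnostics.Stopwatch]::StartNew()
    $objects = Get-SCOMAgent                     # any of the commands listed above
    $sw.Stop()
    '{0} returned {1} objects in {2:N1} seconds' -f 'Get-SCOMAgent', @($objects).Count, $sw.Elapsed.TotalSeconds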

Follow-up of performance

To provide a good overview of performance, we can easily run a performance report through IT Service Analytics that shows an overview of the collected data.

To see more detail for a specific date, click on the desired day to view the selected counters +/- 1 day.

Free download

Our Management Pack can be downloaded below. All use is at your own risk, and it should be tested before being introduced into production. Fill in the contact form to access the download file.


20% Discount On Our Capacity Reports Management Pack

IT’S REPORTING SEASON!

Since it’s reporting season again, we’re offering a 20% discount on our Capacity Reports Management Pack, valid until March 15, 2018!

Our Capacity Reports Management Pack accesses the OpsMgr data warehouse and forecasts a scenario for a set of selected objects based on their usage.

All OpsLogix products are native to Operations Manager 2012 & 2016 and fully integrate into the System Center IT infrastructure.

BUY IT TODAY

You can’t access the UNIX/Linux computers view in the Administration pane in Microsoft System Center 2012 R2 Operations Manager?

If you can’t access the UNIX/Linux computers view in the Administration pane in Microsoft System Center 2012 R2 Operations Manager, you probably see the following error message:

Date: 12/30/2017 7:48:49 PM
Application: Operations Manager
Application Version: 7.1.10226.1360
Severity: Error
Message: System.NullReferenceException: Object reference not set to an instance of an object.
   at Microsoft.SystemCenter.CrossPlatform.UI.OM.Integration.UnixComputerOperatingSystemHelper.JoinCollections(IEnumerable`1 managementServers, IEnumerable`1 resourcePools, IEnumerable`1 unixcomputers, IEnumerable`1 operatingSystems)
   at Microsoft.SystemCenter.CrossPlatform.UI.OM.Integration.UnixComputerOperatingSystemHelper.GetUnixComputerOperatingSystemInstances(String criteria)
   at Microsoft.SystemCenter.CrossPlatform.UI.OM.Integration.Administration.UnixAgentQuery.DoQuery(String criteria)
   at Microsoft.EnterpriseManagement.Mom.Internal.UI.Cache.Query`1.DoQuery(String criteria, Nullable`1 lastModified)
   at Microsoft.EnterpriseManagement.Mom.Internal.UI.Cache.Query`1.FullUpdateQuery(CacheSession session, IndexTable& indexTable, Boolean forceUpdate, DateTime queryTime)
   at Microsoft.EnterpriseManagement.Mom.Internal.UI.Cache.Query`1.InternalSyncQuery(CacheSession session, IndexTable indexTable, UpdateReason reason, UpdateType updateType)
   at Microsoft.EnterpriseManagement.Mom.Internal.UI.Cache.Query`1.InternalQuery(CacheSession session, UpdateReason reason)
   at Microsoft.EnterpriseManagement.Mom.Internal.UI.Cache.Query`1.TryDoQuery(UpdateReason reason, CacheSession session)
   at Microsoft.EnterpriseManagement.Mom.Internal.UI.Console.ConsoleJobExceptionHandler.ExecuteJob(IComponent component, EventHandler`1 job, Object sender, ConsoleJobEventArgs args)

Cause

The issue occurs if the UNIX/Linux monitoring resource pool has been deleted.

How to solve it!

To resolve the issue, follow these steps:

  1. Create a resource pool for UNIX/Linux monitoring. Give the new pool a different name than the name of the deleted resource pool.
  2. Add the management servers that perform UNIX/Linux monitoring to the new resource pool.
  3. Configure the UNIX/Linux Run As accounts to be distributed by the new resource pool (a PowerShell alternative is sketched after this list). To do this, follow these steps:
    • In the Operations console, go to Administration > Run As Configuration > UNIX/Linux Accounts.
    • For each account, follow these steps:
      – Right-click the account, and then select Properties.
      – On the Distribution Security page of the UNIX/Linux Run As Accounts Wizard, select More Secure.
      – In Selected computers and resource pools, select Add.
      – Select Search by resource pool name, and then select Search.
      – Select the new resource pool created in step 1, select Add, and then select OK.
  4. Run the following PowerShell cmdlet to retrieve the managed UNIX and Linux computers:
    Get-SCXAgent
  5. Verify that the agents that are associated with the deleted resource pool still exist and that the relationship remains.
  6. Run the following command to change the managing resource pool to the one that is created in step 1:

    # Look up the new pool created in step 1, then move all UNIX/Linux agents to it:
    $SCXPool = Get-SCOMResourcePool -DisplayName "<New Resource Pool Name>"
    Get-SCXAgent | Set-SCXResourcePool -ResourcePool $SCXPool
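
The distribution change in step 3 can also be scripted. A minimal sketch, assuming the OperationsManager module on a management server and a placeholder account name (repeat for each UNIX/Linux Run As account):

    # Distribute a UNIX/Linux Run As account to the new pool in "More Secure" mode.
    Import-Module OperationsManager
    $pool = Get-SCOMResourcePool -DisplayName "<New Resource Pool Name>"
    Get-SCOMRunAsAccount -Name "<UNIX/Linux Run As account name>" |
        Set-SCOMRunAsDistribution -MoreSecure -SecureDistribution $pool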

Original article.