ScreenConnect High CPU Usage
-
@scottalanmiller said in ScreenConnect High CPU Usage:
@Dashrender said in ScreenConnect High CPU Usage:
@scottalanmiller said in ScreenConnect High CPU Usage:
@Dashrender said in ScreenConnect High CPU Usage:
OIC. What I don't know is - is my SC session running on that same VM, or its own? It's not supposed to be part of the main NTG group of SC users.
You have your own SC system; this is ours that I'm looking at. This isn't a thread about you.
Good to have that confirmed, but it does seem like a semi-related problem - they are both having performance issues.
I don't believe that yours was updated, though. This appears to be an issue with the update.
Everything on the Internet is having performance problems right now, so the fact that the two share that during a national Internet outage isn't too telling.
Is the slowness coming from latency introduced by the attacks? Is that why you say they could be related?
-
@Dashrender said in ScreenConnect High CPU Usage:
Is the slowness coming from latency introduced by the attacks? Is that why you say they could be related?
That's our guess. No other changes, nothing visible on the system.
The system here had updates run, twice, and an obvious and immediate system impact after the updates. And looking at the reports, the updates were immediately followed by massive disk activity. So the guess is that the system is running a database compression process or something similar, and that it is using a massive amount of disk IO.
-
And loads of disk IO leads to IOWaits.
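If we want to confirm that guess rather than just infer it from sar, something along these lines should catch the process behind the writes (a rough sketch; pidstat ships in the same sysstat package as sar, while iotop would need to be installed separately):
# Per-process disk read/write rates, sampled every 5 seconds
pidstat -d 5
# Or interactively, showing only the processes currently doing IO
iotop -o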
-
Since @Dashrender brought it up, for comparison this is his ScreenConnect instance during the same window:
07:20:01 AM     CPU     %user     %nice   %system   %iowait    %steal     %idle
07:30:01 AM     all      0.68      0.00      0.16      0.03      0.08     99.06
07:40:01 AM     all      0.36      0.00      0.14      0.03      0.06     99.41
07:50:01 AM     all      0.60      0.00      0.20      0.04      0.09     99.08
08:00:01 AM     all      0.44      0.00      0.18      0.03      0.07     99.29
08:10:01 AM     all      0.80      0.00      0.23      0.03      0.09     98.85
08:20:01 AM     all      0.47      0.00      0.15      0.04      0.07     99.27
08:30:01 AM     all      0.74      0.00      0.22      0.04      0.10     98.90
08:40:01 AM     all      1.21      0.00      0.21      0.04      0.11     98.44
08:50:01 AM     all      2.02      0.00      0.30      0.05      0.23     97.40
09:00:01 AM     all      0.70      0.00      0.17      0.04      0.10     98.99
09:10:01 AM     all      1.31      0.00      0.34      0.06      0.12     98.18
09:20:01 AM     all      1.30      0.00      0.21      0.04      0.11     98.33
09:30:02 AM     all      2.37      0.00      0.38      0.06      0.27     96.92
09:40:01 AM     all      1.16      0.00      0.24      0.03      0.15     98.41
09:50:01 AM     all      1.11      0.00      0.22      0.04      0.12     98.51
10:00:01 AM     all      0.67      0.00      0.19      0.03      0.08     99.04
10:10:02 AM     all      1.29      0.00      0.30      0.05      0.08     98.29
10:20:01 AM     all      0.66      0.00      0.17      0.03      0.06     99.08
10:30:01 AM     all      1.57      0.00      0.57      0.05      0.13     97.68
10:40:01 AM     all      1.12      0.00      0.57      0.05      0.13     98.13
10:50:01 AM     all      1.48      0.00      0.58      0.07      0.16     97.72
11:00:01 AM     all      1.00      0.00      0.34      0.04      0.11     98.51
11:10:01 AM     all      1.25      0.00      0.30      0.05      0.10     98.31
11:20:01 AM     all      0.88      0.00      0.20      0.04      0.08     98.80
11:30:01 AM     all      1.10      0.00      0.19      0.04      0.10     98.57
11:40:01 AM     all      0.70      0.00      0.17      0.04      0.10     99.00
11:50:01 AM     all      1.04      0.00      0.24      0.06      0.11     98.55
12:00:01 PM     all      0.70      0.00      0.20      0.04      0.09     98.98
Average:        all      0.68      0.01      0.21      0.03      0.07     99.00
And here are the disks:
07:20:01 AM       tps      rtps      wtps   bread/s   bwrtn/s
07:30:01 AM      1.01      0.00      1.01      0.00     22.10
07:40:01 AM      0.75      0.00      0.75      0.00     13.15
07:50:01 AM      1.06      0.00      1.06      0.00     22.80
08:00:01 AM      0.84      0.00      0.84      0.00     15.65
08:10:01 AM      1.32      0.00      1.32      0.00     25.53
08:20:01 AM      0.86      0.00      0.86      0.00     16.36
08:30:01 AM      1.11      0.00      1.11      0.00     25.73
08:40:01 AM      1.09      0.01      1.08      0.08     22.73
08:50:01 AM      1.18      0.00      1.18      0.00     28.18
09:00:01 AM      1.14      0.00      1.14      0.03     20.68
09:10:01 AM      1.66      0.31      1.35     13.68     31.35
09:20:01 AM      0.99      0.01      0.98      0.05     20.73
09:30:02 AM      1.22      0.01      1.21      0.55     28.42
09:40:01 AM      0.93      0.00      0.93      0.03     19.95
09:50:01 AM      1.43      0.00      1.43      0.00     31.65
10:00:01 AM      0.96      0.00      0.96      0.00     20.60
10:10:02 AM      1.42      0.16      1.26      6.77     30.39
10:20:01 AM      1.25      0.09      1.16      4.04     22.91
10:30:01 AM      1.80      0.00      1.79      0.09     47.44
10:40:01 AM      1.64      0.00      1.64      0.00     39.45
10:50:01 AM      1.77      0.00      1.77      0.00     46.79
11:00:01 AM      1.38      0.00      1.38      0.00     30.81
11:10:01 AM      1.32      0.06      1.26     10.12     29.75
11:20:01 AM      1.05      0.00      1.05      0.00     22.53
11:30:01 AM      1.17      0.00      1.17      0.00     28.66
11:40:01 AM      1.13      0.00      1.13      0.00     22.02
11:50:01 AM      1.19      0.00      1.19      0.00     28.63
12:00:01 PM      1.00      0.00      1.00      0.00     20.82
Average:         1.05      0.06      0.99      5.29     20.66
As you can see, totally different performance. But same OS, host, VM configuration, etc.
-
Both systems are on a similar system update and reboot schedule, only hours off from each other. Both are at identical patches right now:
# cat /etc/redhat-release
CentOS Linux release 7.2.1511 (Core)
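If anyone wants to verify the identical-patches claim beyond the release file, here is a rough sketch for diffing the installed package sets between the two VMs (the hostnames below are made up for illustration):
# Dump and sort the installed package list from each box (hypothetical hostnames)
ssh root@sc-box-a 'rpm -qa | sort' > a-packages.txt
ssh root@sc-box-b 'rpm -qa | sort' > b-packages.txt
# No output from diff means the two package sets are identical
diff a-packages.txt b-packages.txt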
-
@scottalanmiller said in ScreenConnect High CPU Usage:
I'm an idiot, should have grabbed the block device report straight away. Here is the disk activity:
09:46:39 AM       LINUX RESTART
09:50:02 AM       tps      rtps      wtps    bread/s   bwrtn/s
10:00:01 AM     10.39      0.05     10.34       0.73    297.16
10:10:07 AM     95.03     85.88      9.15   16396.11    276.51
10:20:01 AM    155.68    146.07      9.61   28524.76    260.43
10:30:02 AM      2.29      0.07      2.22       1.03     50.41
10:40:01 AM      5.55      0.00      5.55       0.00    162.94
10:50:01 AM      2.00      0.04      1.96       7.22     42.68
11:00:01 AM      5.77      0.00      5.77       0.00    158.67
11:10:01 AM      3.17      1.45      1.72      19.73     41.24
11:20:01 AM      6.14      0.03      6.11       0.38    167.57
11:30:02 AM     25.66     24.06      1.61    1020.60     48.93
11:40:01 AM     16.00      5.80     10.20     350.94    265.18
11:50:01 AM      1.43      0.07      1.36      20.54     31.69
12:00:01 PM      6.53      1.02      5.51      18.00    150.42
Average:        24.83     19.44      5.39    3387.89    147.87
That's some crazy load even for a RAID 10 SSD array. No wonder it is slowing down. Something major is going to disk.
What command is this? Or is this from the logs?
-
@Dashrender said in ScreenConnect High CPU Usage:
@scottalanmiller said in ScreenConnect High CPU Usage:
I'm an idiot, should have grabbed the block device report straight away. Here is the disk activity:
09:46:39 AM       LINUX RESTART
09:50:02 AM       tps      rtps      wtps    bread/s   bwrtn/s
10:00:01 AM     10.39      0.05     10.34       0.73    297.16
10:10:07 AM     95.03     85.88      9.15   16396.11    276.51
10:20:01 AM    155.68    146.07      9.61   28524.76    260.43
10:30:02 AM      2.29      0.07      2.22       1.03     50.41
10:40:01 AM      5.55      0.00      5.55       0.00    162.94
10:50:01 AM      2.00      0.04      1.96       7.22     42.68
11:00:01 AM      5.77      0.00      5.77       0.00    158.67
11:10:01 AM      3.17      1.45      1.72      19.73     41.24
11:20:01 AM      6.14      0.03      6.11       0.38    167.57
11:30:02 AM     25.66     24.06      1.61    1020.60     48.93
11:40:01 AM     16.00      5.80     10.20     350.94    265.18
11:50:01 AM      1.43      0.07      1.36      20.54     31.69
12:00:01 PM      6.53      1.02      5.51      18.00    150.42
Average:        24.83     19.44      5.39    3387.89    147.87
That's some crazy load even for a RAID 10 SSD array. No wonder it is slowing down. Something major is going to disk.
What command is this? Or is this from the logs?
sar -b
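Both tables are just sar output - the CPU one is the default utilization view, the disk one is the block device report. Roughly (the file name under /var/log/sa is just an example day):
# CPU utilization report (the default, same as plain "sar")
sar -u
# Block device / transfer rate report (the tables above)
sar -b
# Pull an earlier day's data from the saved daily file, e.g. day 05
sar -b -f /var/log/sa/sa05
For scale, sar reports bread/s and bwrtn/s in 512-byte blocks if I remember right, so that 28524.76 bread/s peak works out to roughly 14 MB/s of sustained reads on top of the normal workload.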
-
Unless specifically requested, I have no reason to sign into your (or any other) SC system.
I have been aware of an update, but have held off on it. I updated NTG SC to attempt to correct an issue unrelated to @Dashrender's previously stated issue.
-
@gjacobse said in ScreenConnect High CPU Usage:
Unless specifically requested, I have no reason to sign into your (or any other) SC system.
I have been aware of an update, but have held off on it. I updated NTG SC to attempt to correct an issue unrelated to @Dashrender's previously stated issue.
Looks like the latest update has some major baggage with it. It needs a good six-hour window in which to deploy because of the IOPS that it generates. So far, the system is looking pretty good now that the process has cooled down.
-
@gjacobse have you seen the performance issues improve from the end user perspective?
-
@gjacobse said in ScreenConnect High CPU Usage:
Unless specifically requested, I have no reason to sign into your (or any other) SC system.
I have been aware of an update, but have held off on it. I updated NTG SC to attempt to correct an issue unrelated to @Dashrender's previously stated issue.
I wasn't implying or otherwise saying you were. Only answering Scott's question about seeing people in that session. All's good!
As for my slowness issue - if you guys are seeing remote access performance issues like I am, then Scott is probably right that the ongoing attacks are possibly the cause, and there's nothing to do to fix that.
As for the update - any reason not to apply it to my system?
-
@Dashrender said in ScreenConnect High CPU Usage:
As for the update - any reason not to apply it to my system?
Yes, huge system impact immediately after deploying. To the point that it appears to take the system offline for twenty minutes from the disk impact! It was enough that we were getting gateway timeouts for a few minutes.
-
This is the first test of the latest update. Customers don't normally get the updates until we've had time to test them. This one was patched early for some reason I am not aware of. But the patch is being tested now.
-
Cool, I'm in no hurry - Gene mentioned there was one when I mentioned there were performance issues last week.
-
Just happened to look at the versions - the current stable version was released on 9/28/2016:
ScreenConnect_6.0.11622.6115_Release.tar.gz Stable 9/28/2016 37 MB Linux
-
@gjacobse said in ScreenConnect High CPU Usage:
Just happened to consider the versions - The current stable version was released on 9/28/2016
ScreenConnect_6.0.11622.6115_Release.tar.gz Stable 9/28/2016 37 MB Linux
I upgraded to this version last week Friday before the wedding.
I had no issues, but it is still running on a Windows Server VM. I keep meaning to migrate, but never remember when I have a block of time for internal work.
-
How is it working at this point?
-
@scottalanmiller said in ScreenConnect High CPU Usage:
How is it working at this point?
You asking me or Gene? My system has been working great. Though my upgrade was only from the previous version. Single step.
-
@JaredBusch said in ScreenConnect High CPU Usage:
@scottalanmiller said in ScreenConnect High CPU Usage:
How is it working at this point?
You asking me or Gene? My system has been working great. Though my upgrade was only from the previous version. Single step.
I do not recall our previous version, but it should have been only one version different as well. Unsure if that detail is in the log files.
-
@JaredBusch said in ScreenConnect High CPU Usage:
@scottalanmiller said in ScreenConnect High CPU Usage:
How is it working at this point?
You asking me or Gene? My system has been working great. Though my upgrade was only from the previous version. Single step.
Sorry, Gene.