System Monitors

@Ilia
I have set up system monitors, and after a steep learning curve I have ended up with this:

Is it possible to sort the cyan-arrowed column by monitor name? The entries appear sorted in the module but not on the dashboard.
The two green-arrowed columns are difficult to read, as you have to count the hosts across to work out which one has failed. Could you merge them into a single column using a format like <hostname> - <status>, with one entry per monitored host, and also turn the hostname text into a link (the same as the link in Webmin ⇾ Webmin ⇾ Webmin Servers Index ⇾ registered server)? Clicking the arrowed link would then take you to the failed server in one click, rather than opening a new tab etc. to get there. I understand this may not be suitable for Webmin systems that don’t manage remote Webmin installations, but for server empire owners such as @Stegan it may make life a bit easier, or maybe not :slight_smile:
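Something like this, for example (the hostnames here are just made-up placeholders):

    web1.example.com - OK
    web2.example.com - Failed
    mail.example.com - OK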
Another small point: the Monitor history shows values (free memory & disk space) on a host that is in failure. Could the value be added after the warning icon? In some cases the failure may be a false alarm, and the value will either prove or disprove that.

Hello,

Thanks for the heads up!

I agree, it’s sub-optimal and would require some rework. How does it look for you in the Framed theme though?

@Jamie, what are your thoughts on it?

Also, can you PM me a backup of the monitor configs from the /etc/webmin directory, so I can have them listed the way you do?

I like this suggestion of grouping the hosts and statuses. Is this controlled by the theme, or the API that returns sections for the dashboard?

Not so good, I would say

and it’s still not sorted, etc.

That’s the module’s API, and the same is displayed on the Tools ⇾ System and Server Status page.

Thanks for the monitors; I see it too now.

If you are going to do some work on system monitors, here is an impossible situation you can get into:


If you select more than one host here, you cannot enter the relevant information for each individual host, so when multiple hosts are selected the system monitor only queries the URL or file contained in the two arrowed fields.

Okay, I have fixed all the mentioned display-related issues in both the module and the Authentic theme. It will also display better in the Framed theme.

Do you think you could try the Webmin rolling build (for testing purposes) to see how it goes for you?

This is better. Authentic:

Framed:

I am not one to look at very bright screens (I have an eye condition), so I will always favor a dark screen, but in Framed there does not appear to be much contrast between the tick and the background.
Displaying each host on a separate line does increase the screen real estate used, though. I had to remove the top charts (Authentic) to make the display a bit smaller, but I still have the charts (if I need them) in the right-hand slider.

Mind you, on one of the remote hosts I now get this error:

Port number above 0xFFFF, will be truncated to 49163 for Socket::pack_sockaddr_in at /usr/share/webmin/fastrpc.cgi line 352.
Port number above 0xFFFF, will be truncated to 49162 for Socket::pack_sockaddr_in at /usr/share/webmin/fastrpc.cgi line 352.

repeated so many times that it consumed 3.8 GB of disk space in 4 hours. It’s not 100% related to this thread, but is there a fix for that?

It seems that something is flooding the connection!

@Jamie, any idea what that might be?

That’s odd. It doesn’t look that greenish to me, e.g.:

You didn’t force reload the page! It should look better!

I don’t know what is causing this, although I think it’s a symptom of too many local ports being opened! I will fix that warning message in the next release though.
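For anyone following along, here is a minimal Perl sketch of why that warning fires. The 49163 value matches the log above, but the oversized input port is only an assumed illustration, since we don’t yet know where the real value comes from:

    use strict;
    use warnings;
    use Socket qw(pack_sockaddr_in inet_aton);

    # Ports are 16-bit, so pack_sockaddr_in() masks anything above
    # 0xFFFF (65535) down to 16 bits and emits the warning seen in the log.
    my $port = 114699;    # assumed input; 114699 & 0xFFFF == 49163
    my $sin  = pack_sockaddr_in($port, inet_aton("127.0.0.1"));
    # warns: Port number above 0xFFFF, will be truncated to 49163 ...
    printf "port actually packed: %d\n", $port & 0xFFFF;

So whatever fastrpc.cgi is being handed as a port number is somehow exceeding 65535 before it reaches pack_sockaddr_in.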

I know you already did. I also added a patch today for that code to log the port number.

However, what I mean is, what is actually causing it?

Not sure! But it would be useful to see the output from lsof -i tcp on a remote system that’s seeing this error.

I did, and there is no change

OK, I’ll switch it all back on, wait for the error, and post the data up.

I was hoping for something more like this:


which is what I mentioned earlier on.

It turned out, for whatever reason, that the errors came from each server checking itself & the others in the servers index. Changing the poll time on each server to something different from the default 5 minutes solved it: server A 5 mins, server B 6 mins, server C 7 mins, etc. I’m guessing it flooded because everyone was hitting everyone else at around the same time.
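If that guess is right, the staggering makes sense arithmetically: with identical 5-minute intervals, every server polls every other server at the same instant, whereas intervals of 5, 6 and 7 minutes only all coincide once every LCM(5, 6, 7) = 210 minutes.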