
DMZ-based Message Relay to collect status from rogue computers

Dear All,

 

Would there be a way (or would it even be a good idea) to set up a message relay in a DMZ for travelling users, so that machines taken off the network for an extended period (say, for a remote project) could still report their status back to the management server?

 

I am guessing that 8193 and 8194 would be the two ports, and that only authenticated clients (with proper certificates) would be allowed to connect (e.g. a client that had already connected to the console while on the network, prior to leaving it).

 

The other question is: how would I swap machines in and out of this pool? E.g. 5 machines 'came home' and are back on the network, while 7 other users will be away from next week. Does that mean constantly reconfiguring each client's messaging setup back to the management server every time?

 

Many thanks,

DanZi



  • Hello DanZi,

    [have to mention a message from the sponsor: With Central you can manage endpoints regardless of their whereabouts]

    there's no need to manually pool the endpoints unless you want to assign different policies to endpoints on the road. The ports are, BTW, 8192 and 8194 (a quick reachability check is sketched below).
    But first things first: Where do they update from? Definitely not the standard UNC path.

    Christian
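
A quick way to sanity-check from a roaming machine that the DMZ relay is reachable on the RMS ports mentioned above is a plain TCP connect test. A minimal Python sketch; the relay hostname is a hypothetical example:

```python
# Minimal reachability check for the RMS ports (8192/8194) on a DMZ message relay.
# The hostname below is a hypothetical example, not a real relay.
import socket

RELAY_HOST = "msgrelay.example.com"   # hypothetical public name of the DMZ relay
RMS_PORTS = (8192, 8194)

for port in RMS_PORTS:
    try:
        with socket.create_connection((RELAY_HOST, port), timeout=5):
            print(f"{RELAY_HOST}:{port} is reachable")
    except OSError as err:
        print(f"{RELAY_HOST}:{port} is NOT reachable ({err})")
```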

  • Hi Christian,

     

    Internally they would update from a UNC share or, better, an HTTP-shared CID. But since the list of machines that are on the road is constantly changing, I don't want all of them talking via the same message relay... or is that not an issue? (See the WebCID reachability sketch below.)

     

    Thanks a lot

     

    D.
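
A simple way to confirm that an HTTP-shared CID (WebCID) actually answers from outside is a HEAD request against it. A minimal sketch, with a purely hypothetical URL:

```python
# Check that a WebCID answers over HTTP(S). The URL is a hypothetical example,
# not a real Sophos CID path.
import urllib.request

WEBCID_URL = "https://updates.example.com/SophosUpdate/CIDs/S000/SAVSCFXP/"

try:
    request = urllib.request.Request(WEBCID_URL, method="HEAD")
    with urllib.request.urlopen(request, timeout=10) as response:
        print(f"WebCID answered with HTTP {response.status}")
except OSError as err:
    print(f"WebCID not reachable: {err}")
```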

  • Hello DanZi,

    hm, I'm a little bit confused. Internally UNC and externally WebCID, or generally a WebCID? If so, one with a public name/IP I assume.
    And a relay (assuming it's a server) can handle the same number of endpoints as the management server.

    There's more than one way to skin a cat.
    I see there's already an answer by a pro. The article doesn't address the constant inside/outside changes, though. With this basic method you configure the potentially roaming endpoints to update from a CID that carries the MR configuration. All these endpoints would then use the MR regardless of whether they are currently inside or outside. Not really a problem; it depends on the DMZ's segregation from the internal network (i.e. whether network/security/firewall agrees to generally open 8192/8194 between the internal network and the DMZ).

    As Stephen Higgins has mentioned, FQDNs come into play, as does split DNS (see the sketch below). Variations are possible depending on whether you internally (also) use routable addresses.

    RMS reconfiguration should do no harm. It's conceivable to use the UNC path as the Primary update location and the WebCID (which carries the relay configuration) as the Secondary for all endpoints, although this adds some latency for updates from outside. In case internal endpoints fail to update from the Primary (it happens, but should be rare) they will switch to the relay, then fall back after the next successful "internal" update (the fallback behaviour is sketched after this reply).

    Christian
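
The split-DNS point means the relay's FQDN resolves to an internal address inside the LAN and to a public address outside. A small sketch (FQDN and internal range are hypothetical) showing which view an endpoint currently gets:

```python
# Show which DNS view (internal vs. public) an endpoint currently sees for the
# relay's FQDN. Name and internal range are hypothetical examples.
import ipaddress
import socket

RELAY_FQDN = "msgrelay.example.com"                  # hypothetical relay FQDN
INTERNAL_RANGE = ipaddress.ip_network("10.0.0.0/8")  # hypothetical internal range

resolved = ipaddress.ip_address(socket.gethostbyname(RELAY_FQDN))
if resolved in INTERNAL_RANGE:
    print(f"{RELAY_FQDN} -> {resolved}: internal view, endpoint looks to be inside")
else:
    print(f"{RELAY_FQDN} -> {resolved}: public view, endpoint looks to be on the road")
```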
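
The Primary/Secondary behaviour described above amounts to "try the UNC Primary, fall back to the WebCID Secondary, revert on the next successful internal update". A minimal sketch of that decision; the UNC path and URL are hypothetical examples, not actual Sophos configuration:

```python
# Sketch of the Primary (UNC) / Secondary (WebCID) update-location fallback.
# Paths and URLs are hypothetical examples, not actual Sophos configuration.
import os
import urllib.request

PRIMARY_UNC = r"\\fileserver\SophosUpdate\CIDs\S000\SAVSCFXP"                      # hypothetical internal CID
SECONDARY_WEBCID = "https://updates.example.com/SophosUpdate/CIDs/S000/SAVSCFXP/"  # hypothetical WebCID

def primary_reachable() -> bool:
    # Inside the LAN the UNC share should be visible; on the road it will not be.
    return os.path.exists(PRIMARY_UNC)

def secondary_reachable() -> bool:
    try:
        with urllib.request.urlopen(SECONDARY_WEBCID, timeout=10) as response:
            return response.status == 200
    except OSError:
        return False

def choose_update_location():
    # Endpoints try the Primary first and only fall back to the Secondary
    # (the WebCID carrying the relay configuration) when the Primary fails.
    if primary_reachable():
        return "Primary (UNC)", PRIMARY_UNC
    if secondary_reachable():
        return "Secondary (WebCID)", SECONDARY_WEBCID
    return "none", None

if __name__ == "__main__":
    which, location = choose_update_location()
    print(f"Would update from {which}: {location}")
```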