Practical minimum default TTL for multi site Windows DNS

Author
Discussion

Greenmantle

Original Poster:

1,405 posts

114 months

Tuesday 11th October 2022
Posting here but I will probably have to reach out further afield.
We have a smallish two-site Windows Server estate with a domain controller and DNS server in each site.
Currently the DNS servers have a minimum (default) TTL of 1 hour.
I have recently introduced failover clusters and I want to change the 1-hour value to 30 seconds.
All of this is Windows Server 2019 on VMware 7.0 with fast Gen10 blades, so nothing shoddy.
Can I do this?
I have seen virtually instantaneous failover at other sites and I believe the DNS servers' minimum TTL is the culprit here.

duff-man

628 posts

212 months

Tuesday 11th October 2022
Why do you think it’s dns? What issues are you seeing?

Is this a multi-subnet cluster? What’s the service/app you are trying to failover?

Edited by duff-man on Tuesday 11th October 23:59

theboss

7,092 posts

225 months

Wednesday 12th October 2022
I'm not sure exactly how you'd achieve it with Windows DNS, i.e. whether changing the minimum TTL will affect most dynamic records, or whether Windows clients set a longer TTL on their own records when registering.

The result you probably want is for your static and dynamically registered records (service locator records and so on) to persist with a 1hr+ TTL, while still supporting much shorter TTLs for failover cluster resources. You want the stability and persistence of the static stuff but the fast timeout of cluster resources to aid failover times.

I haven't worked much with Windows clustering in recent versions but with DNS based GTM/GSLB systems I'm used to seeing TTL of 5-60 seconds for resources that need to failover quickly.

somouk

1,425 posts

204 months

Wednesday 12th October 2022
duff-man said:
Why do you think it’s dns? What issues are you seeing?

Is this a multi-subnet cluster? What’s the service/app you are trying to failover?

Edited by duff-man on Tuesday 11th October 23:59
This. What's actually failing to fail over that makes you think the DNS TTL is the culprit?

Greenmantle

Original Poster:

1,405 posts

114 months

Wednesday 12th October 2022
Yes, it is multi-subnet, with a simple MPLS backbone connecting the two sites.
The service is a SQL Server Availability Group (not to be confused with a traditional failover cluster instance, which uses shared storage).

I think it is DNS because I am repeatedly running nslookup against both DNS servers.

The DNS server servicing the cluster node that is failing updates immediately, while the DNS server servicing the new active node takes at least 30 minutes.

I'm not sure what impact the SOA settings have on this particular DNS record, since I set the record's TTL to only 30 seconds. This is based on the lowest recommended value, i.e. never set it to 0.
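For reference, the same side-by-side check can be scripted with Resolve-DnsName rather than nslookup; the listener and DC names here are hypothetical placeholders for your own:

```powershell
# Query both DNS servers for the listener's A record and compare the answers.
# 'ag1-listener' and 'dc-a'/'dc-b' are hypothetical names - substitute your own.
foreach ($dns in 'dc-a.contoso.local', 'dc-b.contoso.local') {
    Resolve-DnsName -Name 'ag1-listener.contoso.local' -Type A -Server $dns |
        Select-Object @{ n = 'Server'; e = { $dns } }, Name, IPAddress, TTL
}
```

Running that in a loop after a failover shows exactly when each server starts returning the new address.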

theboss

7,092 posts

225 months

Wednesday 12th October 2022
My understanding of SQL Server availability groups is that clients are normally aware of them: DNS returns all of the associated IP addresses and the clients will cycle through them in the event of failures. Obviously that depends on the client software libraries, SQL Server drivers etc. supporting this.

This seems to sum it up pretty well.

https://techcommunity.microsoft.com/t5/sql-server-...
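In practice the client-side piece boils down to enabling MultiSubnetFailover in the connection string. A sketch with hypothetical listener and database names, assuming a driver that supports the keyword (SqlClient and recent ODBC/JDBC drivers do):

```powershell
# Hypothetical listener/database names. MultiSubnetFailover=True makes the client
# attempt every IP returned for the listener in parallel, so it reaches the live
# replica without waiting for a cached DNS answer to expire.
$cs = 'Server=tcp:ag1-listener.contoso.local,1433;Database=AppDb;' +
      'Integrated Security=SSPI;MultiSubnetFailover=True'
$conn = New-Object System.Data.SqlClient.SqlConnection $cs
$conn.Open()    # connects to whichever replica answers first
$conn.Close()
```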

dmsims

6,749 posts

273 months

Wednesday 12th October 2022
This is a client issue (not DNS)

geeks

9,549 posts

145 months

Wednesday 12th October 2022
theboss said:
My understanding of SQL Server availability groups is that clients are normally aware of them: DNS returns all of the associated IP addresses and the clients will cycle through them in the event of failures. Obviously that depends on the client software libraries, SQL Server drivers etc. supporting this.

This seems to sum it up pretty well.

https://techcommunity.microsoft.com/t5/sql-server-...
This was my understanding too.

Also, just because, always remember the DNS haiku...

[image: the DNS haiku]
Greenmantle

Original Poster:

1,405 posts

114 months

Wednesday 12th October 2022
Sorry guys, it definitely is something within DNS.
There are no clients in my testing, just straightforward nslookup from a cmd prompt.
The availability group fails over and all is good at the SQL Server level.
But DNS takes 30 minutes to switch the dynamic IP address even though the TTL is 30 seconds.
The DNS server for the old primary gets the update immediately, but the propagation of that change to the other DNS server takes 30 minutes. Why?

theboss

7,092 posts

225 months

Wednesday 12th October 2022
If the zones are AD-integrated, the records are just AD objects and you'll be looking at the usual inter-site replication delay.

Check your Sites and Services setup.

In any case, whilst you're not seeing DNS update 'quickly' enough, for a SQL AG setup DNS should return all availability replica IPs and your AG-aware SQL clients should know that they can attempt each of them in turn.
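A few quick, read-only checks on the replication side (run on a DC; the DC name is a hypothetical placeholder):

```powershell
# Summary of replication deltas and failure counts across all DCs:
repadmin /replsummary

# Per-partner, per-partition replication status for one DC:
repadmin /showrepl dc-a.contoso.local

# Site link costs and schedules as AD sees them:
Get-ADReplicationSiteLink -Filter * |
    Select-Object Name, Cost, ReplicationFrequencyInMinutes
```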

dmsims

6,749 posts

273 months

Wednesday 12th October 2022

Greenmantle

Original Poster:

1,405 posts

114 months

Wednesday 12th October 2022
theboss said:
If the zones are AD-integrated the records are just AD objects and you'll be looking at the usual inter-site replication delay

Check your sites and services setup.

In any case whilst you're not seeing DNS update 'quickly' enough, for a SQL AG setup DNS should return all availability replica IP's and your AG-aware SQL clients should know that they can attempt to use them in turn.
Unfortunately that doesn't suit the application, so I changed the RegisterAllProvidersIP value to 0.

A quick update: it looks like the problem is with just one side.

Whether you are failing A to B or B to A, B updates instantly while A takes between 7 and 30 minutes.
The funny thing is A is the primary site in AD.
Therefore B can only update A's AD, while A can update everyone including B and, say, others X, Y and Z.

Thinking this through, maybe there is a delay in updating X, Y or Z and A has to wait for that to complete before it can update itself. Obviously I could be talking rubbish.
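One way to test that theory is to look at A's inbound partners and queued work, then force a sync and see whether the record lands immediately (DC name hypothetical):

```powershell
# Inbound replication partners and any queued/failed work on DC A:
repadmin /showrepl dc-a.contoso.local
repadmin /queue dc-a.contoso.local

# Force a one-off enterprise-wide push sync of all partitions; if the DNS record
# then appears on A straight away, the lag is replication scheduling, not DNS.
repadmin /syncall /AdeP
```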

duff-man

628 posts

212 months

Wednesday 12th October 2022
Greenmantle said:
Sorry guys, it definitely is something within DNS.
There are no clients in my testing, just straightforward nslookup from a cmd prompt.
The availability group fails over and all is good at the SQL Server level.
But DNS takes 30 minutes to switch the dynamic IP address even though the TTL is 30 seconds.
The DNS server for the old primary gets the update immediately, but the propagation of that change to the other DNS server takes 30 minutes. Why?
Please read the article that theboss has linked to. Your clients should be aware they are connecting to a multi-subnet failover cluster and be configured accordingly. I wouldn't go reducing the AD inter-site replication interval to try and fix this.

Here's an article describing your scenario
https://learnsqlserverhadr.com/much-ado-about-dns/

theboss

7,092 posts

225 months

Wednesday 12th October 2022
If you've already set RegisterAllProvidersIP to 0 because the client won't support multiple records being returned, then I believe you need to set HostRecordTTL to something much lower. It's described in both of the articles linked above.

I would go with something like 60s and confirm you get the expected result. You could drop lower. As first mentioned, I'm used to dealing with very short (often 5s) TTL for globally load balanced services.

Note it's the cluster service that will dynamically register and update the record during failovers, hence the TTL being defined by a parameter of the cluster resource.

For this reason you don't need to change the TTL directly on any DNS record nor the 'minimum (default) TTL' property on any zone's SOA.

As I mentioned you're almost certainly seeing the variance in propagation times because of AD replication, unless some of your DCs are set as DNS forwarders on others (which in most situations they shouldn't be, but that's another subject).
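For completeness, both listener parameters are set on the cluster's network-name resource itself. A sketch with the hypothetical resource name 'AG1_Listener' (run elevated on a cluster node, and note the listener is briefly restarted):

```powershell
# Set both listener parameters in one call on the network-name resource:
Get-ClusterResource 'AG1_Listener' |
    Set-ClusterParameter -Multiple @{ HostRecordTTL = 60; RegisterAllProvidersIP = 0 }

# The new values only take effect after the listener resource is cycled:
Stop-ClusterResource 'AG1_Listener'
Start-ClusterResource 'AG1_Listener'

# Confirm the values stuck:
Get-ClusterResource 'AG1_Listener' |
    Get-ClusterParameter HostRecordTTL, RegisterAllProvidersIP
```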

pistonheadforum

1,170 posts

127 months

Wednesday 12th October 2022
Changing the TTL implies that you are having issues with the failover responding quickly enough (clients getting stale records). For me the bigger question would be why you want to tweak the failover behaviour rather than understand why the cluster is continually getting borked and needing to fail over to the other node quicker.

This seems like a case where a fundamental rethink and replanning of what you have and want to achieve might be a better place to focus, as it might make things clearer and easier.

Greenmantle

Original Poster:

1,405 posts

114 months

Wednesday 12th October 2022
pistonheadforum said:
Changing the TTL implies that you are having issues with the failover responding quickly enough (clients getting stale records). For me the bigger question would be why you want to tweak the failover behaviour rather than understand why the cluster is continually getting borked and needing to fail over to the other node quicker.

This seems like a case where a fundamental rethink and replanning of what you have and want to achieve might be a better place to focus, as it might make things clearer and easier.
Good question.
Unfortunately this is the norm as promoted by Microsoft.
If you have worked with either on-premises clusters or even Azure IaaS clusters, failover and failback happen quite regularly: patching, service outages etc.
Instantaneous failover means that SLA uptimes of 99.99% are easily achievable.

Have a look at some of the SLAs being offered on Azure SQL PaaS. Some of them are worse than 99.99% even with availability zones.

Greenmantle

Original Poster:

1,405 posts

114 months

Wednesday 12th October 2022
theboss said:
If you've already set RegisterAllProvidersIP to 0 because the client won't support multiple records being returned, then I believe you need to set HostRecordTTL to something much lower. It's described in both of the articles linked above.

I would go with something like 60s and confirm you get the expected result. You could drop lower. As first mentioned, I'm used to dealing with very short (often 5s) TTL for globally load balanced services.

Note it's the cluster service that will dynamically register and update the record during failovers, hence the TTL being defined by a parameter of the cluster resource.

For this reason you don't need to change the TTL directly on any DNS record nor the 'minimum (default) TTL' property on any zone's SOA.

As I mentioned you're almost certainly seeing the variance in propagation times because of AD replication, unless some of your DCs are set as DNS forwarders on others (which in most situations they shouldn't be, but that's another subject).
Yes, correct: the HostRecordTTL is reduced to 120 seconds.
I am now thinking that DNS is not the culprit, and I have moved my analysis over to Active Directory replication, especially site-to-site costing and topology where there are bridgeheads. Will keep you posted.

theboss

7,092 posts

225 months

Wednesday 12th October 2022
Greenmantle said:
theboss said:
If you've already set RegisterAllProvidersIP to 0 because the client won't support multiple records being returned, then I believe you need to set HostRecordTTL to something much lower. It's described in both of the articles linked above.

I would go with something like 60s and confirm you get the expected result. You could drop lower. As first mentioned, I'm used to dealing with very short (often 5s) TTL for globally load balanced services.

Note it's the cluster service that will dynamically register and update the record during failovers, hence the TTL being defined by a parameter of the cluster resource.

For this reason you don't need to change the TTL directly on any DNS record nor the 'minimum (default) TTL' property on any zone's SOA.

As I mentioned you're almost certainly seeing the variance in propagation times because of AD replication, unless some of your DCs are set as DNS forwarders on others (which in most situations they shouldn't be, but that's another subject).
yes correct the HostRecordTTL is reduced to 120 seconds.
I am now thinking that the DNS is not the culprit and I have moved my analysis over to the Active Directory Replication especially site to site costing and topology where there are bridgeheads. Will keep you posted.
If it's just two sites the inter-site replication topology should be pretty simple :)

You should just have the default site link containing both sites, with the replication interval set down towards 15 minutes.

There's generally no need to customise anything at this scale unless you're using dial-up modems to link the sites. One of the ADs I look after has 30 sites on some pretty dodgy WAN links (think factory floors in Central America, rural Bangladesh etc.) and we replicate every link at 15 minutes.
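If the interval is the bottleneck, something like this drops the default link to the 15-minute scheduled minimum; inter-site change notification (the options bit 0x1) is also worth testing on a decent WAN, since it makes replication across the link near-immediate rather than schedule-driven:

```powershell
# Drop the default site link to the 15-minute scheduled minimum:
Set-ADReplicationSiteLink -Identity 'DEFAULTIPSITELINK' -ReplicationFrequencyInMinutes 15

# Optionally enable inter-site change notification (options bit 0x1), so changes
# replicate as they happen instead of waiting for the schedule:
Set-ADReplicationSiteLink -Identity 'DEFAULTIPSITELINK' -Replace @{ options = 1 }
```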

Greenmantle

Original Poster:

1,405 posts

114 months

Wednesday 12th October 2022
theboss said:
Greenmantle said:
theboss said:
If you've already set RegisterAllProvidersIP to 0 because the client won't support multiple records being returned, then I believe you need to set HostRecordTTL to something much lower. It's described in both of the articles linked above.

I would go with something like 60s and confirm you get the expected result. You could drop lower. As first mentioned, I'm used to dealing with very short (often 5s) TTL for globally load balanced services.

Note it's the cluster service that will dynamically register and update the record during failovers, hence the TTL being defined by a parameter of the cluster resource.

For this reason you don't need to change the TTL directly on any DNS record nor the 'minimum (default) TTL' property on any zone's SOA.

As I mentioned you're almost certainly seeing the variance in propagation times because of AD replication, unless some of your DCs are set as DNS forwarders on others (which in most situations they shouldn't be, but that's another subject).
yes correct the HostRecordTTL is reduced to 120 seconds.
I am now thinking that the DNS is not the culprit and I have moved my analysis over to the Active Directory Replication especially site to site costing and topology where there are bridgeheads. Will keep you posted.
If it's just two sites the inter-site replication topology should be pretty simple :)

You should just have the default site link containing both sites and set the replication interval down towards 15m

There's generally no need to customise anything on this scale unless you're using dial-up modems to link the sites. One of the AD's I look after has 30 sites on some pretty dodgy WAN links (think factory floors in central America, rural Bangladesh etc) and we replicate every link on 15m.
Sorry, I just mentioned the two sites to simplify things.
In fact it is 10 sites, with two in Azure (all at 15 minutes).
None of the sites are as bad as you described.

AD Sites and Services is a classic hub-and-spoke setup with just one DC in each site, and I am thinking that's the issue, since site A above is actually the hub.

Also, the DEFAULTIPSITELINK is still present with all 10 sites in it.
I have been really impressed with this set of videos:
https://www.youtube.com/watch?v=N7yFQx0Jv54
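A stale DEFAULTIPSITELINK overlapping purpose-built links can produce odd effective replication paths, so listing every link with its member sites makes the topology easier to audit:

```powershell
# Each site link with its cost, schedule and member sites:
Get-ADReplicationSiteLink -Filter * -Properties SitesIncluded |
    Select-Object Name, Cost, ReplicationFrequencyInMinutes, SitesIncluded
```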