Failover Clustering and Network Load Balancing Team Blog
Hi Cluster Fans,
In this post I will discuss a deployment issue that is sometimes seen on Windows Server 2008 and 2008 R2, and how to work around it. Let's say you want to validate a server, create a cluster, or add a node to a cluster. There is a requirement that a node can be a member of only one cluster at a time. When you attempt the operation, you may see a message telling you that one of the servers you want to use, a non-clustered server, is already part of a cluster, so you cannot continue. Why? The message states: "The computer '<Server Name>' is joined to a cluster."
The reason is that this node was at some point a cluster node, but when the cluster was destroyed or the node was evicted, the clustering state on this node was not cleaned up properly. This can happen if the node was offline while the original cluster was destroyed: the other nodes were cleaned up, but this node was never 'unclustered'.
If you try to connect to that node using Failover Cluster Manager, it will attempt to connect and then time out after about five minutes.
If you check the status of the node using PowerShell, it will report that the node is in a ‘Joining’ state.
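For example, a quick way to inspect the node's reported state is the following sketch, run on the affected node (assumes the FailoverClusters module is installed; NodeName is a placeholder for the server's name):

```powershell
# Load the failover clustering cmdlets (2008 R2 and later).
Import-Module FailoverClusters

# 'NodeName' is a placeholder; a stale node will show State : Joining.
Get-ClusterNode -Name NodeName | Select-Object Name, State
```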
If you get into this situation, you can run the following eviction commands directly on the node to properly clean up the cluster components from that node so that you can reuse it in another cluster.
PowerShell (2008 R2 only):
PS> Get-ClusterNode NodeName | Remove-ClusterNode -Force
CMD> cluster.exe node NodeName /force
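As an aside not in the original post: 2008 R2 also ships a dedicated cleanup cmdlet, Clear-ClusterNode, which scrubs the clustering configuration from a node that has already been evicted. A minimal sketch, with NodeName as a placeholder:

```powershell
Import-Module FailoverClusters

# Run directly on the stale node to remove its leftover cluster configuration.
Clear-ClusterNode -Name NodeName -Force
```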
To avoid getting to this state, you should make sure that all of your nodes are online when you destroy the cluster or evict a node, so that the cluster can properly clean up the clustering components on every node.
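When destroying a cluster from PowerShell on 2008 R2, the Remove-Cluster cmdlet performs this cleanup on every node it can reach. A hedged sketch (the cluster name MyCluster is a placeholder):

```powershell
Import-Module FailoverClusters

# Destroys the cluster and cleans up all online nodes.
# -CleanupAD also removes the cluster's objects from Active Directory.
Remove-Cluster -Cluster MyCluster -Force -CleanupAD
```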
If you plan to evict a node which has resources on it, you should first gracefully move the resources off it to the best nodes (using Move Group or live migration for VMs). If you forget, the cluster will still failover the resources for you, but it may not be in your preferred way.
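In PowerShell terms, the drain-then-evict sequence described above might look like the following sketch (MyCluster and NodeName are placeholders; the cluster chooses the destination node when none is given):

```powershell
Import-Module FailoverClusters

# Gracefully move every group currently hosted on the node elsewhere.
Get-ClusterGroup -Cluster MyCluster |
    Where-Object { $_.OwnerNode -eq 'NodeName' } |
    Move-ClusterGroup

# Then evict the now-empty node.
Remove-ClusterNode -Cluster MyCluster -Name NodeName
```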
Program Manager II, Clustering & High-Availability
Awesome blog, I have been bugged by this error many times. Your blog came to the rescue... G88
Any idea what to do when we receive "An error occurred evicting the node NODENAME; the remote system is either paused or in the process of being started"?
In my case, I got the "the computer is joined to a cluster" message, but none of the machines were clustered; I'm talking about fresh OS installations. It seems that you need to disable the Cluster service on both machines or even reinstall Failover Clustering.
Thank you very much. You saved me a lot of hours.
Great answer. This one is hard to find.
I get the following response from PowerShell:
Get-ClusterNode : The cluster service is not running. Make sure that the service is running on all nodes in the cluster.
There are no more endpoints available from the endpoint mapper
At line:1 char:16
+ Get-ClusterNode <<<< <SERVER-NAME>| Remove-ClusterNode -Force
+ CategoryInfo : NotSpecified: (:) [Get-ClusterNode], ClusterCmdletException
+ FullyQualifiedErrorId : Get-ClusterNode,Microsoft.FailoverClusters.PowerShell.GetNodeCommand
Has anyone seen this before, or have any suggestions?
Finally got it working. The CMD version was successful (cluster.exe node -NodeName /force).
Still needed to create a fake cluster and then destroy it to stop the cluster service from complaining every 15 minutes.
If the above fails, remove the node from the domain and run cluster.exe node -NodeName /force
I believe if using the CMD cluster command, the right syntax is to not use the hyphen, i.e.
cluster.exe node NodeName /force
Yes, Rick is correct:
cluster.exe node MyNodeNameNoHyphen /Force
Solution on Windows Server 2012 clusters:
Open PowerShell and type:
(where NodeName is the name of the previously evicted cluster node)
None of these answers worked for me, which is very frustrating. I get the same messages as above, but none of the answers presented did a bit of good. Any more suggestions? (2008 R2)
Ensure the Cluster service on all nodes is set to Disabled. If for whatever reason the service is still set to Automatic after destroying the cluster, the Validate wizard assumes there is still a running cluster on that node.
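One way to check and enforce that, sketched in PowerShell (the cluster service's name is ClusSvc):

```powershell
# Stop the cluster service and prevent it from starting automatically.
Stop-Service -Name ClusSvc -Force
Set-Service -Name ClusSvc -StartupType Disabled
```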
This solved my problem