<?xml version="1.0" encoding="utf-8" standalone="yes"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom">
  <channel>
    <title>Advanced on Agones</title>
    <link>/site/docs/advanced/</link>
    <description>Recent content in Advanced on Agones</description>
    <generator>Hugo -- gohugo.io</generator>
    <language>en</language>
    <lastBuildDate>Thu, 03 Jan 2019 05:44:55 +0000</lastBuildDate><atom:link href="/site/docs/advanced/index.xml" rel="self" type="application/rss+xml" />
    <item>
      <title>System Diagram</title>
      <link>/site/docs/advanced/system-diagram/</link>
      <pubDate>Thu, 18 Apr 2024 00:00:00 +0000</pubDate>
      
      <guid>/site/docs/advanced/system-diagram/</guid>
      <description>Agones Control Plane The Agones Control Plane consists of 4 Deployments:
NAME                READY   UP-TO-DATE   AVAILABLE   AGE
agones-allocator    3/3     3            3           40d
agones-controller   2/2     2            2           40d
agones-extensions   2/2     2            2           40d
agones-ping         2/2     2            2           40d
agones-allocator agones-allocator provides a gRPC/REST service that translates allocation requests into GameServerAllocations. See Allocator Service for more information.
agones-controller agones-controller maintains various control loops for all Agones CRDs (GameServer, Fleet, etc.). A single leader-elected Pod of the Deployment is active at any given time (see High Availability).</description>
    </item>
    
    <item>
      <title>Scheduling and Autoscaling</title>
      <link>/site/docs/advanced/scheduling-and-autoscaling/</link>
      <pubDate>Thu, 03 Jan 2019 05:45:05 +0000</pubDate>
      
      <guid>/site/docs/advanced/scheduling-and-autoscaling/</guid>
      <description>Cluster Autoscaler Kubernetes has a cluster node autoscaler that works with a wide variety of cloud providers.
The default scheduling strategy (Packed) is designed to work with the Kubernetes autoscaler out of the box.
The autoscaler will automatically add Nodes to the cluster when GameServers don&amp;rsquo;t have room to be scheduled on the cluster, and then scale down when there are empty Nodes with no GameServers running on them.
This means that scaling Fleets up and down can be used to control the size of the cluster, as the cluster autoscaler will adjust the size of the cluster to match the resource needs of one or more Fleets running on it.</description>
    </item>
    
    <item>
      <title>High Availability Agones</title>
      <link>/site/docs/advanced/high-availability-agones/</link>
      <pubDate>Fri, 10 Feb 2023 00:00:00 +0000</pubDate>
      
      <guid>/site/docs/advanced/high-availability-agones/</guid>
      <description>High Availability for Agones Controller The agones-controller responsibility is split up into agones-controller, which enacts the Agones control loop, and agones-extensions, which acts as a service endpoint for webhooks and the allocation extension API. Splitting these responsibilities allows the agones-extensions pod to be horizontally scaled, making the Agones control plane highly available and more resilient to disruption.
Multiple agones-controller pods are enabled, with a primary controller selected via leader election. Having multiple agones-controller pods minimizes downtime of the service from pod disruptions such as deployment updates, autoscaler evictions, and crashes.</description>
    </item>
    
    <item>
      <title>Controlling Disruption</title>
      <link>/site/docs/advanced/controlling-disruption/</link>
      <pubDate>Tue, 24 Jan 2023 20:15:26 +0000</pubDate>
      
      <guid>/site/docs/advanced/controlling-disruption/</guid>
      <description>Disruption in Kubernetes A Pod in Kubernetes may be disrupted for involuntary reasons, e.g. hardware failure, or voluntary reasons, such as when nodes are drained for upgrades.
By default, Agones assumes your game server should never be disrupted voluntarily and configures the Pod appropriately - but this isn&amp;rsquo;t always the ideal setting. Here we discuss how Agones allows you to control the two most significant sources of voluntary Pod evictions, node upgrades and Cluster Autoscaler, using the eviction API on the GameServer object.</description>
    </item>
    
    <item>
      <title>Limiting CPU &amp; Memory</title>
      <link>/site/docs/advanced/limiting-resources/</link>
      <pubDate>Thu, 03 Jan 2019 05:45:15 +0000</pubDate>
      
      <guid>/site/docs/advanced/limiting-resources/</guid>
      <description>As a short description:
CPU Requests are soft limits that are applied only when there is CPU congestion, and as such containers can burst above them when spare capacity is available. CPU Limits are hard limits on how much CPU time the particular container gets access to. This is useful for game servers, not just as a mechanism to distribute compute resources evenly, but also as a way to advise the Kubernetes scheduler how many game server processes it is able to fit into a given node in the cluster.</description>
    </item>
    
    <item>
      <title>Out of Cluster Dev Server</title>
      <link>/site/docs/advanced/out-of-cluster-dev-server/</link>
      <pubDate>Sat, 22 Jul 2023 17:21:25 +0000</pubDate>
      
      <guid>/site/docs/advanced/out-of-cluster-dev-server/</guid>
      <description>This section builds upon the topics discussed in local SDK Server, Local Game Server, and GameServer allocation (discussed here, here, and here). Having a firm understanding of those concepts will be necessary for running an &amp;ldquo;out of cluster&amp;rdquo; local server.
Running an &amp;ldquo;out of cluster&amp;rdquo; dev server combines the best parts of local debugging and being a part of a cluster. A developer will be able to run a custom server binary on their local machine, even within an IDE with breakpoints.</description>
    </item>
    
    <item>
      <title>Allocator Service</title>
      <link>/site/docs/advanced/allocator-service/</link>
      <pubDate>Tue, 19 May 2020 05:45:05 +0000</pubDate>
      
      <guid>/site/docs/advanced/allocator-service/</guid>
      <description>To allocate a game server, Agones provides a gRPC and REST service with mTLS authentication, called agones-allocator, that can be used instead of GameServerAllocations.
Both gRPC and REST are accessible through a Kubernetes service that can be externalized using a load balancer. By default, gRPC and REST are served from the same port. However, either service can be disabled or the services can be served from separate ports using the helm configuration.</description>
    </item>
    
    <item>
      <title>Multi-cluster Allocation</title>
      <link>/site/docs/advanced/multi-cluster-allocation/</link>
      <pubDate>Fri, 25 Oct 2019 05:45:05 +0000</pubDate>
      
      <guid>/site/docs/advanced/multi-cluster-allocation/</guid>
      <description>This implementation of multi-cluster allocation was written before managed and open source multi-cluster Service Meshes, such as Istio and Linkerd, were available and so widely utilised.
We now recommend implementing a Service Mesh in each of your Agones clusters and backend services cluster to provide a multi-cluster allocation endpoint that points to each Agones cluster&amp;rsquo;s Allocation Service.
Service Mesh specific projects provide far more powerful features, easier configuration and maintenance, and we expect that they will be something that you will likely be installing in your multi-cluster architecture anyway.</description>
    </item>
    
    <item>
      <title>GameServer Pod Service Accounts</title>
      <link>/site/docs/advanced/service-accounts/</link>
      <pubDate>Thu, 14 Mar 2019 04:30:37 +0000</pubDate>
      
      <guid>/site/docs/advanced/service-accounts/</guid>
      <description>Default Settings By default, Agones sets up service accounts and configures them appropriately for the Pods that are created for GameServers.
Since Agones provides GameServer Pods with a sidecar container that needs access to Agones Custom Resource Definitions, Pods are configured with a service account with extra RBAC permissions to ensure that it can read and modify the resources it needs.
Since service accounts apply to all containers in a Pod, Agones will automatically overwrite the mounted key for the service account in the container that is running the dedicated game server in the backing Pod.</description>
    </item>
    
  </channel>
</rss>
