unity-builder/src/model/cloud-runner/providers/k8s/kubernetes-job-spec-factory.ts

import { V1EnvVar, V1EnvVarSource, V1SecretKeySelector } from '@kubernetes/client-node';
import BuildParameters from '../../../build-parameters';
import { CommandHookService } from '../../services/hooks/command-hook-service';
import CloudRunnerEnvironmentVariable from '../../options/cloud-runner-environment-variable';
import CloudRunnerSecret from '../../options/cloud-runner-secret';
import CloudRunner from '../../cloud-runner';
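
/**
 * Builds the Kubernetes V1Job definition for a single cloud runner build:
 * the build container, its environment variables and secrets, the shared
 * build volume mount, and the lifecycle hook used to return the Unity license.
 */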
class KubernetesJobSpecFactory {
static getJobSpec(
command: string,
image: string,
mountdir: string,
workingDirectory: string,
environment: CloudRunnerEnvironmentVariable[],
secrets: CloudRunnerSecret[],
buildGuid: string,
buildParameters: BuildParameters,
secretName: string,
pvcName: string,
jobName: string,
k8s: any,
containerName: string,
ip: string = '',
) {
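    // Environment variable names that may point at services on the host machine
    // (for example a local AWS endpoint emulator) and so need rewriting for in-cluster access.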
const endpointEnvironmentNames = new Set([
'AWS_S3_ENDPOINT',
'AWS_ENDPOINT',
'AWS_CLOUD_FORMATION_ENDPOINT',
'AWS_ECS_ENDPOINT',
'AWS_KINESIS_ENDPOINT',
'AWS_CLOUD_WATCH_LOGS_ENDPOINT',
'INPUT_AWSS3ENDPOINT',
'INPUT_AWSENDPOINT',
]);
const adjustedEnvironment = environment.map((x) => {
let value = x.value;
if (
typeof value === 'string' &&
endpointEnvironmentNames.has(x.name) &&
(value.startsWith('http://localhost') || value.startsWith('http://127.0.0.1'))
) {
// Replace localhost with host.k3d.internal so pods can access host services
// This simulates accessing external services (like real AWS S3)
value = value
.replace('http://localhost', 'http://host.k3d.internal')
.replace('http://127.0.0.1', 'http://host.k3d.internal');
}
return { name: x.name, value } as CloudRunnerEnvironmentVariable;
});
const job = new k8s.V1Job();
job.apiVersion = 'batch/v1';
job.kind = 'Job';
job.metadata = {
name: jobName,
labels: {
app: 'unity-builder',
buildGuid,
},
};
// Reduce TTL for tests to free up resources faster (default 9999s = ~2.8 hours)
// For CI/test environments, use shorter TTL (300s = 5 minutes) to prevent disk pressure
const jobTTL = process.env['cloudRunnerTests'] === 'true' ? 300 : 9999;
job.spec = {
ttlSecondsAfterFinished: jobTTL,
backoffLimit: 0,
template: {
spec: {
terminationGracePeriodSeconds: 90, // Give PreStopHook (60s sleep) time to complete
volumes: [
{
name: 'build-mount',
persistentVolumeClaim: {
claimName: pvcName,
},
},
],
containers: [
{
              name: containerName,
image,
command: ['/bin/sh'],
args: [
'-c',
`${CommandHookService.ApplyHooksToCommands(`${command}\nsleep 2m`, CloudRunner.buildParameters)}`,
],
workingDir: `${workingDirectory}`,
resources: {
requests: {
                  // Fall back to defaults when container sizes are missing or cannot be parsed
                  memory:
                    Number.parseInt(buildParameters.containerMemory) > 0
                      ? `${Number.parseInt(buildParameters.containerMemory) / 1024}G`
                      : '750M',
                  cpu: Number.parseInt(buildParameters.containerCpu) / 1024 || '1',
},
},
env: [
...adjustedEnvironment.map((x) => {
const environmentVariable = new V1EnvVar();
environmentVariable.name = x.name;
environmentVariable.value = x.value;
return environmentVariable;
}),
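                    // Inject each cloud runner secret as an env var sourced from the
                    // Kubernetes Secret referenced by secretName.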
...secrets.map((x) => {
const secret = new V1EnvVarSource();
secret.secretKeyRef = new V1SecretKeySelector();
secret.secretKeyRef.key = x.ParameterKey;
secret.secretKeyRef.name = secretName;
const environmentVariable = new V1EnvVar();
environmentVariable.name = x.EnvironmentVariable;
environmentVariable.valueFrom = secret;
return environmentVariable;
}),
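                    // IP of the log collection service that the job streams logs to (may be empty).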
{ name: 'LOG_SERVICE_IP', value: ip },
],
volumeMounts: [
{
name: 'build-mount',
mountPath: `${mountdir}`,
},
],
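              // On pod shutdown, wait before best-effort running return_license.sh so the Unity license is released.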
lifecycle: {
preStop: {
exec: {
command: [
'/bin/sh',
'-c',
'sleep 60; cd /data/builder/action/steps && chmod +x /steps/return_license.sh 2>/dev/null || true; /steps/return_license.sh 2>/dev/null || true',
],
},
},
},
},
],
restartPolicy: 'Never',
// Add tolerations for CI/test environments to allow scheduling even with disk pressure
// This is acceptable for CI where we aggressively clean up disk space
tolerations: [
{
key: 'node.kubernetes.io/disk-pressure',
operator: 'Exists',
effect: 'NoSchedule',
},
],
},
},
};
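    // When targeting minikube, swap the PVC volume for a hostPath mount on the node's /data directory.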
if (process.env['CLOUD_RUNNER_MINIKUBE']) {
job.spec.template.spec.volumes[0] = {
name: 'build-mount',
hostPath: {
path: `/data`,
type: `Directory`,
},
};
}
// Set ephemeral-storage request to a reasonable value to prevent evictions
// For tests, use smaller request (1Gi) since k3d nodes have limited disk space (~2.8GB available)
// For production, use 2Gi to allow for larger builds
// The node needs some free space headroom, so requesting too much causes evictions
const ephemeralStorageRequest = process.env['cloudRunnerTests'] === 'true' ? '1Gi' : '2Gi';
job.spec.template.spec.containers[0].resources.requests[`ephemeral-storage`] = ephemeralStorageRequest;
return job;
}
}
export default KubernetesJobSpecFactory;