good
On Wednesday, February 21, 2024 at 17:07 UTC, we deployed a configuration change to one of the services within Actions. At 17:14 UTC we noticed an increase in exceptions that impacted approximately 85% of runs at that time.

At 17:18 UTC, we reverted the deployment and our service immediately recovered. During this window, customers may have seen workflows fail to trigger, or workflows that were queued but did not progress.

To prevent this issue in the future, we are improving our deployment observability tooling to detect errors earlier in the deployment pipeline.
Feb 21, 5:30 PM UTC
minor
We are investigating reports of degraded performance for Actions
Feb 21, 5:20 PM UTC
good
On Monday, February 12, 2024 at 03:00 UTC, we deployed a code change to a component of Copilot. At 06:00 UTC we observed an increase in timeouts for code completions, impacting 55% of Copilot users at peak across Asia and Europe.

At 12:00 UTC we restarted the nodes, and response durations returned to normal until 13:00 UTC, when they degraded again. At 16:15 UTC we made a configuration change to send traffic to regions that were not exhibiting the errors, which fully restored code completions, although at higher latency than normal for some users. At 18:00 UTC we reverted the deploy and response durations returned to normal.

We have added better monitoring to the components that failed, in order to decrease resolution times for incidents like this in the future.
Feb 12, 6:14 PM UTC
minor
Code completion response times have returned to normal.
Feb 12, 6:13 PM UTC
minor
We’re still continuing to investigate slower than expected code completions for a subset of users in Europe. Next update to be provided in 30 minutes.
Feb 12, 5:30 PM UTC
minor
We’re continuing to investigate slower than expected code completions for a subset of users in Europe. Next update to be provided in 30 minutes.
Feb 12, 5:01 PM UTC
minor
We're continuing to investigate slower than expected code completions for a subset of users in Europe. Next update to be provided in 30 minutes.
Feb 12, 4:26 PM UTC
minor
Code completions are now working for the impacted users, but completing more slowly than expected. Investigation continues to completely mitigate the issue and restore Copilot code completion functionality to normal.
Feb 12, 3:51 PM UTC
minor
Following mitigation steps taken, we have reduced the impact to a narrower subset of users. Investigation continues to completely mitigate the issue and restore Copilot code completion functionality.
Feb 12, 3:16 PM UTC
minor
We are continuing to investigate the issues with Copilot code completion currently impacting some users in Europe. We will provide further details as we have them.
Feb 12, 2:36 PM UTC
minor
We have confirmed that this is a recurrence of the earlier issue. Impact is currently limited to some European users. The team is working through alternative mitigation strategies to resolve the issue and restore normal service.
Feb 12, 2:04 PM UTC
minor
We are investigating reports that the earlier problem with Copilot code completions is recurring.
Feb 12, 1:28 PM UTC
minor
We are investigating reports of degraded performance for Copilot
Feb 12, 1:28 PM UTC
good
On Monday, February 12, 2024 at 03:00 UTC, we deployed a code change to a component of Copilot. At 06:00 UTC we observed an increase in timeouts for code completions, impacting 55% of Copilot users at peak across Asia and Europe.

At 12:00 UTC we restarted the nodes, and response durations returned to normal until 13:00 UTC, when they degraded again. At 16:15 UTC we made a configuration change to send traffic to regions that were not exhibiting the errors, which fully restored code completions, although at higher latency than normal for some users. At 18:00 UTC we reverted the deploy and response durations returned to normal.

We have added better monitoring to the components that failed, in order to decrease resolution times for incidents like this in the future.
Feb 12, 12:39 PM UTC
minor
We are starting to see recovery in the signals the team has been monitoring, following the mitigation steps taken. When confident that recovery is complete, we will resolve this incident.
Feb 12, 12:29 PM UTC
minor
We are continuing to investigate increased failure rates for Copilot code completion for some users in Europe.
Feb 12, 12:00 PM UTC
minor
We are investigating reports that GitHub Copilot code completions are not working for some users in Europe.
Feb 12, 11:38 AM UTC
minor
We are investigating reports of degraded performance for Copilot
Feb 12, 11:38 AM UTC
good
On February 9, 2024 between 10:34 UTC and 11:24 UTC, the Webhooks service was degraded and 63% of webhooks were delayed by up to 16 minutes with an average delay of 5 minutes. No webhook deliveries were lost. This was due to an issue with an overloaded backend data store that was unable to process network requests fast enough.

We mitigated the incident by manually failing over traffic to healthy hosts.

We are expanding the capacity of the backing store as well as making the Webhooks service more resilient to this kind of issue.

Feb 09, 11:28 AM UTC
minor
Webhooks is operating normally.
Feb 09, 11:25 AM UTC
minor
We are investigating latency in processing webhooks. Customers may see a delay of around 5 minutes at this time. We will continue to keep users updated on progress towards mitigation.
Feb 09, 11:11 AM UTC
minor
We are investigating reports of degraded performance for Webhooks
Feb 09, 11:09 AM UTC
good
On 2024-02-05, from 09:26 to 13:20 UTC some GitHub customers experienced errors when trying to download raw files. An overloaded server exposed a bug, causing us to return HTTP 500 error codes.

The issue was mitigated by disabling the server and re-routing traffic. We are implementing improvements to our routing logic to more quickly avoid troublesome hosts in the future.

Feb 05, 9:53 AM UTC
minor
We are investigating reports of degraded performance for Git Operations
Feb 05, 9:40 AM UTC
good
An update to our design system caused issues loading dynamic content in the global side navigation menu and in other page-specific sidebar navigation elements. Impacted users saw continuous loading spinners in place of dynamic menu content. User impact lasted from 0:55 UTC to 4:41 UTC on February 1st.

We are working on a number of improvements in response to this incident. We are adding request volume monitors to sidebar navigation endpoints and making changes to our front end escalation paths to improve our time to detect and time to recovery for incidents of this nature. We have also begun work to improve both automated and manual testing for these types of changes in order to prevent recurrence.
Feb 01, 4:41 AM UTC
minor
This issue has been resolved. A reload of your browser window/tab may be required if you continue to experience issues with the collapsible navigation sidebars not loading.
Feb 01, 4:41 AM UTC
minor
We are in the process of deploying a remediation, and expect to see restoration of impacted functionality within the next hour.
Feb 01, 4:21 AM UTC
minor
We have identified an issue that is preventing some navigation components from loading while browsing GitHub.com, and are testing a remediation prior to deployment.
Feb 01, 3:55 AM UTC
minor
We are currently investigating reports of some components of the GitHub.com website not loading for some users.
Feb 01, 3:14 AM UTC
minor
We are currently investigating this issue.
Feb 01, 3:13 AM UTC
good
This incident was the result of an infrastructure change that was made to our load balancers to prepare us for IPv6 enablement of GitHub.com. This change was deployed to a subset of our global edge sites.

The change had the unintended consequence of causing IPv4 addresses to start being passed to our IP Allow List functionality as IPv4-mapped IPv6 addresses.

For example, 10.1.2.3 became ::ffff:10.1.2.3. While our IP Allow List functionality was developed with IPv6 in mind, it was not built to handle these mapped addresses, and so it started blocking requests because it did not consider them to be in the defined list of allowed addresses. Request error rates peaked at 0.23% of all requests.
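
As a rough illustration of the kind of normalization described in the first remediation item below (this is a minimal sketch using Python's standard ipaddress module, not our production implementation), unmapping an IPv4-mapped IPv6 address before the allow list check avoids this class of mismatch:

```python
import ipaddress

def normalize(addr: str):
    """Return the embedded IPv4 address for an IPv4-mapped IPv6 address
    (::ffff:a.b.c.d); otherwise return the parsed address unchanged."""
    ip = ipaddress.ip_address(addr)
    if isinstance(ip, ipaddress.IPv6Address) and ip.ipv4_mapped is not None:
        return ip.ipv4_mapped
    return ip

def is_allowed(addr: str, allow_list: list[str]) -> bool:
    """Check a client address against an allow list of CIDR ranges."""
    ip = normalize(addr)
    return any(ip in ipaddress.ip_network(cidr) for cidr in allow_list)

# Without the normalization step, the mapped form fails the check even though
# the underlying IPv4 address is allowed, which is the failure mode in this incident.
assert is_allowed("10.1.2.3", ["10.1.2.0/24"])
assert is_allowed("::ffff:10.1.2.3", ["10.1.2.0/24"])
```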

We have so far identified three remediation items here:

- Update the IP Allow List functionality to handle IPv4-mapped addresses.
- Audit the rest of our stack to confirm there are no other places where this IPv4-mapped IPv6 address handling flaw exists.
- Improve our testing and monitoring processes to better catch these issues in the future.
Jan 31, 2:57 PM UTC
major
We have resolved the issue and confirmed all regions are now operating as expected.
Jan 31, 2:56 PM UTC
major
The fix for IP allow lists is currently rolling out, and we are awaiting confirmation from specific geographic regions.
Jan 31, 2:49 PM UTC
major
We are rolling out a fix to resolve the issues with IP allow lists. This should be resolved shortly.
Jan 31, 2:33 PM UTC
major
Some customers are experiencing issues with IP allow lists.
Jan 31, 2:14 PM UTC
major
We are currently investigating this issue.
Jan 31, 2:14 PM UTC
good
On January 28, 2024, between 01:00 UTC and 14:00 UTC, the Avatars service was degraded and could not return all avatar images requested by users; instead, it returned a default fallback avatar image. At peak, this incident impacted 6% of the requests for viewing avatars. Impacted requests did not prevent users from continuing to use any GitHub services. This was due to an issue with the Avatars service connecting to a database host.

We mitigated the incident by restarting the malfunctioning hosts that were not able to return the user avatar images.

We are working to improve alerting and monitoring of our services to reduce our time to detection and mitigation.
Jan 28, 2:42 PM UTC
minor
We have mitigated all customer impact. We are no longer serving fallback avatar icons when loading web pages for some customers. We continue to monitor the results.
Jan 28, 2:27 PM UTC
minor
A fix has been implemented for customers seeing the default avatar (octocat) when loading web pages and we are monitoring the results.
Jan 28, 1:57 PM UTC
minor
Some requests for avatars are returning the fallback response instead of the requested avatar because the service is having issues connecting to the MySQL host.
Jan 28, 1:20 PM UTC
minor
We are currently investigating this issue.
Jan 28, 1:20 PM UTC
good
On January 23, 2024 at 14:36 UTC, our internal metrics began showing an increase in exceptions originating from our live update service. Live updates to Issues, PRs, Actions, and Projects were failing, but refreshing the page successfully updated page content. We resolved the issue by rolling back a problematic dependency update and reenabled live updates at 18:53 UTC.

We are working to improve alerting and monitoring of our live update service to reduce our time to detection and mitigation.
Jan 23, 6:53 PM UTC
major
Live updates have been restored and the system is operating normally.
Jan 23, 6:53 PM UTC
major
We have identified and are beginning to roll out a potential fix for issues with live updates to our Web UI that power automatic page updates such as the merge box on Pull Requests and updates to Projects.
Users will see actions spinning on the PR merge box, for example. As a workaround, please refresh the page to get updated content while we work to fix the issue. We will provide further updates as we continue resolving the issue.
Jan 23, 6:14 PM UTC
major
We are continuing to investigate issues with live updates to our Web UI that power automatic page updates such as the merge box on Pull Requests and updates to Projects.
Users will see actions spinning on the PR merge box, for example. As a workaround, please refresh the page to get updated content while we work to fix the issue. We will provide further updates as we continue resolving the issue.
Jan 23, 5:17 PM UTC
major
We are continuing to investigate intermittent issues with live updates to our Web UI that power automatic page updates such as the merge box on Pull Requests and updates to Projects. Users will see actions spinning on the PR merge box, for example. As a workaround, please refresh the page to get updated content while we work to fix the issue. We will provide further updates as we continue resolving the issue.
Jan 23, 4:37 PM UTC
major
We are investigating intermittent issues with live updates to our Web UI that power automatic page updates such as the merge box on Pull Requests. Users will intermittently see actions spinning on the PR merge box, for example. As a workaround, please refresh the page to get updated content while we work to fix the issue. We will provide further updates as we continue resolving the issue.
Jan 23, 3:59 PM UTC
major
We are currently investigating this issue.
Jan 23, 3:50 PM UTC
good
On 2024-01-21 at 3:38 UTC, we experienced an incident that affected customers using Codespaces. Customers encountered issues creating and resuming Codespaces in multiple regions due to operational issues with compute and storage resources.

Around 25% of customers were impacted, primarily in East US and West Europe. We re-routed traffic for Codespace creations to less impacted regions, but existing Codespaces in these regions may have been unable to resume during the incident.

By 7:30 UTC, we had recovered connectivity to all regions except West Europe, which had an extended recovery time due to increased load in that particular region. The incident was resolved on 2024-01-21 at 9:34 UTC once Codespace creations and resumes were working normally in all regions.
Jan 21, 9:34 AM UTC
major
We continue to work on mitigating the issues with Codespace resumes in West Europe.
Jan 21, 9:32 AM UTC
major
We continue to work on mitigating the issues with Codespace resumes in West Europe.
Jan 21, 8:58 AM UTC
major
We continue to work on mitigating the issues with Codespace resumes in West Europe.
Jan 21, 8:27 AM UTC
major
Codespace creation has fully recovered in all regions. We are still mitigating issues with Codespace resumes in West Europe.
Jan 21, 7:55 AM UTC
major
We continue to see recovery in most regions. We are still working on mitigating the issue impacting customers in West Europe.
Jan 21, 7:24 AM UTC
major
We are continuing to monitor recovery in the affected regions.
Jan 21, 6:50 AM UTC
major
We are seeing signs of recovery in the affected regions.
Jan 21, 6:15 AM UTC
major
Codespaces is experiencing degraded performance. We are continuing to investigate.
Jan 21, 6:15 AM UTC
major
We continue to work on mitigating the underlying issue impacting Codespaces customers.
Jan 21, 6:00 AM UTC
major
We continue to work on mitigating the underlying issue impacting Codespaces customers.
Jan 21, 5:24 AM UTC
major
Around 15% of Codespaces customers are unable to create or resume their codespaces. We are continuing efforts to mitigate the issue.
Jan 21, 4:50 AM UTC
major
Codespaces is experiencing degraded availability. We are continuing to investigate.
Jan 21, 4:21 AM UTC
major
We have identified the issue impacting Codespaces customers in multiple regions and are working on mitigation.
Jan 21, 4:15 AM UTC
major
We are experiencing elevated error rates in multiple regions and are currently investigating.
Jan 21, 3:38 AM UTC
major
We are investigating reports of degraded performance for Codespaces
Jan 21, 3:38 AM UTC
good
On 2024-01-21 from 02:05 UTC to 06:19 UTC, GitHub Hosted Runners experienced increased error rates from our main cloud service provider. The errors were initially limited to a single region and we were able to route around the issue by transparently failing over to other regions. However, errors gradually expanded across all regions we deploy to and led to our available compute capacity being exhausted.

During the incident, up to 35% of Actions jobs using Larger Runners and 2% of Actions jobs using GitHub Hosted Runners overall may have experienced intermittent delays in starting. Once the issue was resolved by our cloud service provider, our systems made a full recovery without intervention.

We’re working closely with our service provider to understand the cause of the outage and mitigations we can put in place. We’re also working to increase our resilience to outages of this nature by expanding the regions we deploy to beyond the existing set, especially for Larger Runners.
Jan 21, 6:19 AM UTC
minor
We've applied a mitigation to fix the issues with queuing and running Actions jobs. We are seeing improvements in telemetry and are monitoring for full recovery.
Jan 21, 5:54 AM UTC
minor
We have mitigated the issues impacting Actions Larger Runners. We are still experiencing delays starting normal jobs, and are continuing to investigate.
Jan 21, 5:26 AM UTC
minor
The team has identified the cause of the issues with Actions Larger Runners and has begun mitigation.
Jan 21, 4:53 AM UTC
minor
The team continues to investigate issues with some Actions jobs being queued for a long time and a percentage of jobs failing. We will continue providing updates on the progress towards mitigation.
Jan 21, 4:16 AM UTC
minor
We are investigating reports of degraded performance for Actions
Jan 21, 3:45 AM UTC
good
On January 9 between 12:45 and 13:56 UTC, services in one of our three sites experienced elevated connection latency. This led to a sustained period of timed-out requests across a number of services, including but not limited to our git backend. An average of 5% and a maximum of 10% of requests failed with a 5xx response or timed out during this period.

This was caused by a combination of events that led to connection limits being hit in load balancer proxies in that site. An upgrade of hosts was in flight, which meant a subset of proxy hosts were draining and coming offline as the upgrade rolled through the fleet. A config change event also triggered a connection reset across all services in that site. These events are commonplace, but together they produced a spike in connection establishment that pushed the online proxy hosts to their connection limit. Upon further analysis, that limit was lower than it should have been.

We have increased that limit to prevent this from recurring. We have also identified improvements to our monitoring of connection limits and behavior, and changes to reduce the risk of proxy host upgrades leading to reduced capacity.
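
To illustrate the capacity math with hypothetical numbers (these are illustrative only, not our actual fleet sizes or limits), the same reconnect spike that fits comfortably across a full proxy fleet can exceed a per-host connection limit once part of the fleet is draining:

```python
def per_host_peak_connections(total_reconnects: int, hosts_total: int, hosts_draining: int) -> float:
    """Connections each remaining proxy must absorb if every client reconnects at once."""
    online = hosts_total - hosts_draining
    return total_reconnects / online

# Hypothetical numbers: a reset of 900k connections spread over 30 proxies stays
# under a 40k per-host limit, but with a third of the fleet draining it does not.
assert per_host_peak_connections(900_000, 30, 0) == 30_000   # under a 40k limit
assert per_host_peak_connections(900_000, 30, 10) == 45_000  # over the same limit
```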
Jan 09, 2:40 PM UTC
major
Pull Requests is operating normally.
Jan 09, 2:38 PM UTC
major
Packages is operating normally.
Jan 09, 2:37 PM UTC
major
Git Operations is operating normally.
Jan 09, 2:32 PM UTC
major
Codespaces is operating normally.
Jan 09, 2:27 PM UTC
major
Actions is operating normally.
Jan 09, 2:25 PM UTC
major
API Requests is operating normally.
Jan 09, 2:24 PM UTC
major
Issues is operating normally.
Jan 09, 2:24 PM UTC
major
Webhooks is operating normally.
Jan 09, 2:20 PM UTC
major
Pages is operating normally.
Jan 09, 2:16 PM UTC
major
API Requests is experiencing degraded performance. We are continuing to investigate.
Jan 09, 2:15 PM UTC
major
5xx error rates remain elevated but are trending downward, with many services fully recovered. We will continue monitoring the situation and keep users updated on progress toward full recovery.
Jan 09, 2:15 PM UTC
major
Actions is experiencing degraded performance. We are continuing to investigate.
Jan 09, 2:11 PM UTC
major
We are experiencing an elevated rate of 5xx errors on the order of 1-5% being returned from numerous APIs across the site. The issue has been isolated to one datacenter. We will continue to keep users updated on progress towards mitigation.
Jan 09, 1:39 PM UTC
major
Codespaces is experiencing degraded performance. We are continuing to investigate.
Jan 09, 1:35 PM UTC
major
Packages is experiencing degraded performance. We are continuing to investigate.
Jan 09, 1:34 PM UTC
major
Webhooks is experiencing degraded performance. We are continuing to investigate.
Jan 09, 1:31 PM UTC
major
Git Operations is experiencing degraded performance. We are continuing to investigate.
Jan 09, 1:24 PM UTC
major
Pages is experiencing degraded performance. We are continuing to investigate.
Jan 09, 1:23 PM UTC
major
API Requests is experiencing degraded availability. We are continuing to investigate.
Jan 09, 1:07 PM UTC
major
We are seeing an increase in the rate of 5xx errors on the order of 1-3% being returned from numerous APIs across the site. We will continue to keep users updated on progress towards mitigation.
Jan 09, 1:05 PM UTC
major
Actions is experiencing degraded availability. We are continuing to investigate.
Jan 09, 1:05 PM UTC
major
Pull Requests is experiencing degraded performance. We are continuing to investigate.
Jan 09, 1:04 PM UTC
major
We are investigating reports of degraded performance for Issues and API Requests
Jan 09, 1:02 PM UTC
good
On January 9 between 1:06 and 5:43 UTC, no audit log events were streamed for customers that have audit log streaming enabled. All events that occurred during this time were delivered after the issue was mitigated. Event delivery was failing due to a data shape issue: a streaming configuration linked to a soft-deleted Enterprise caused a runtime error in a backend service. This was mitigated by removing that configuration. Since the incident, we have improved our detection of these errors and ensured support for similar scenarios.
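
As a sketch of the kind of defensive handling meant by supporting similar scenarios (the record shape and names here are hypothetical, not our actual backend), one stale configuration should be skipped rather than allowed to halt delivery for everyone:

```python
from dataclasses import dataclass

@dataclass
class StreamingConfig:
    # Hypothetical shape of an enterprise audit log streaming destination.
    enterprise_id: int
    endpoint: str
    enterprise_soft_deleted: bool = False

def deliverable_configs(configs: list[StreamingConfig]) -> list[StreamingConfig]:
    """Skip configurations whose owning enterprise has been soft-deleted, so a
    single stale record cannot raise and halt delivery for every customer."""
    return [c for c in configs if not c.enterprise_soft_deleted]

configs = [
    StreamingConfig(1, "https://example.com/splunk-hec"),
    StreamingConfig(2, "https://example.com/event-hub", enterprise_soft_deleted=True),
]
# Only the live enterprise's destination remains; the stale one is filtered out
# instead of being allowed to fail the whole delivery run.
assert [c.enterprise_id for c in deliverable_configs(configs)] == [1]
```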
Jan 09, 5:44 AM UTC
minor
Audit log streaming is currently unavailable due to a misconfiguration issue. The root cause is understood and our engineers will be updating the broken configuration shortly.
Jan 09, 5:40 AM UTC
minor
We are currently investigating this issue.
Jan 09, 4:59 AM UTC
good
From January 5th to January 8th, Codespaces experienced issues with port forwarding when connecting from a web browser. During this incident, 100% of operations to forward ports and connect to a forwarded port failed for customers located in US West and Australia. The cause was a cross-origin API change in our port forwarding service. After detecting the issue, we mitigated the impact by rolling back the change that caused the discrepancy in the cross-origin rules. We have improved the way we detect issues like this one, including implementing automated alerts during and after the rollout process, so that we get signals as early as possible.
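
For illustration, a minimal sketch of the kind of cross-origin check involved (the origins and structure are hypothetical, not the actual port forwarding service): if the allowed-origins rules change and a browser origin is no longer echoed back, the browser blocks the request, which is the failure web users saw.

```python
# Hypothetical allow list of browser origins permitted to call the port
# forwarding API; the origins below are illustrative only.
ALLOWED_ORIGINS = {
    "https://github.dev",
    "https://vscode.dev",
}

def cors_headers(request_origin: str) -> dict[str, str]:
    """Echo the origin back only when it is explicitly allowed. A response
    without these headers is rejected by the browser for cross-origin calls."""
    if request_origin in ALLOWED_ORIGINS:
        return {
            "Access-Control-Allow-Origin": request_origin,
            "Access-Control-Allow-Credentials": "true",
            "Vary": "Origin",
        }
    return {}

assert "Access-Control-Allow-Origin" in cors_headers("https://github.dev")
assert cors_headers("https://example.com") == {}
```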
Jan 08, 11:41 PM UTC
minor
The initial mitigation we attempted did not fully resolve this issue. We are continuing to investigate and will provide an update as soon as possible.
Jan 08, 10:37 PM UTC
minor
We are engaged on the issue and continuing to work toward a mitigation. Please continue to fall back to VS Code Desktop for port forwarding workflows.
Jan 08, 9:55 PM UTC
minor
Mitigation of degraded Codespaces port forwarding is in progress. In the meantime, please continue to use VS Code Desktop for port forwarding.
Jan 08, 9:03 PM UTC
minor
We are actively mitigating degraded performance of Codespaces port forwarding in the web and GitHub CLI. Codespaces port forwarding on VS Code Desktop is unaffected.
Jan 08, 8:22 PM UTC
minor
We are investigating reports of degraded performance for Codespaces
Jan 08, 8:22 PM UTC
good
From 10:30 to 16:15 UTC on January 3rd, our `/settings/emails` page experienced downtime due to an outage in our contact permissions system. During this time customers were unable to update their email subscription preferences.

This system plays a crucial role in managing email subscriptions and the contactability of GitHub customers by our marketing and sales teams. Unfortunately, the outage prevented the generation of links for managing email subscriptions, resulting in the `/settings/emails` page timing out. Consequently, customers were unable to update their email subscription preferences during this period.

During the incident we implemented graceful degradation on the `/settings/emails` page to enable certain functionality, albeit with slower response times. By approximately 16:15 UTC on January 3rd, our contact permissions system was fully operational again, and customers could manage their email subscription preferences as usual.

With the issue now resolved, our team is enhancing the resilience of the `/settings/emails` page to minimize the impact of similar outages in the future. This includes better request timeout handling and improved metrics and monitoring in this area of the application.
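
As a sketch of the request timeout handling pattern described above (the endpoint and function names are hypothetical, not our application code), the page fetches the dependency with a short, explicit timeout and degrades rather than timing out the whole request:

```python
import json
import socket
import urllib.error
import urllib.request

# Hypothetical endpoint for the contact permissions system.
SUBSCRIPTION_LINKS_URL = "https://contact-permissions.example.com/links"

def fetch_subscription_links(timeout_seconds: float = 2.0) -> dict:
    """Fetch subscription-management links with a short, explicit timeout."""
    try:
        with urllib.request.urlopen(SUBSCRIPTION_LINKS_URL, timeout=timeout_seconds) as resp:
            return json.load(resp)
    except (urllib.error.URLError, socket.timeout, ValueError):
        # Dependency is down or slow: degrade instead of timing out the whole page.
        return {}

def render_settings_emails_page() -> dict:
    # The rest of the page renders regardless; subscription management is
    # simply hidden when the dependency cannot be reached in time.
    return {"page": "settings/emails", "subscription_links": fetch_subscription_links()}

print(render_settings_emails_page())
```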
Jan 03, 5:05 PM UTC
minor
The /settings/emails page remains slow, but we are continuing to see improvements.
Jan 03, 4:51 PM UTC
minor
We have resolved the timeouts on our end, but customers will still see slow load times on the /settings/emails page.
Jan 03, 4:08 PM UTC
minor
We have identified a potential mitigation for the issue and are rolling it out. Additionally, we are still working with our external provider for a mitigation from them.
Jan 03, 3:13 PM UTC
minor
We are continuing to investigate reports of issues with the /settings/emails page. We are working with the provider to get a mitigation and are exploring options to mitigate on our end.
Jan 03, 2:07 PM UTC
minor
We're seeing issues related to a provider and are engaging them to investigate the issue.
Jan 03, 1:23 PM UTC
minor
We are continuing to investigate reports of issues with the /settings/emails page. We will continue to keep users updated on progress towards mitigation.
Jan 03, 12:59 PM UTC
minor
We're seeing problems loading the /settings/emails page.
Jan 03, 12:42 PM UTC
minor
We are currently investigating this issue.
Jan 03, 12:42 PM UTC
good
On December 26, 2023, GitHub received a report through our Bug Bounty Program demonstrating a vulnerability which, if exploited, allowed access to credentials within a production container. We fixed this vulnerability on GitHub.com the same day and began rotating all potentially exposed credentials. Through this process we found some flaws in how we rotate certain credentials and are working on improving our credential rotation process. More detail can be found on our blog: https://github.blog/2024-01-16-rotating-credentials-for-github-com-and-new-ghes-patches/
Dec 29, 9:21 PM UTC
minor
We are in the process of reverting a change that introduced these failures.
Dec 29, 9:09 PM UTC
minor
We’re investigating reports of increased failure rates for migrations with GitHub Enterprise Importer and exports using the Organization Migrations REST API.
Dec 29, 8:05 PM UTC
minor
We are currently investigating this issue.
Dec 29, 8:05 PM UTC
good
On December 26, 2023, GitHub received a report through our Bug Bounty Program demonstrating a vulnerability which, if exploited, allowed access to credentials within a production container. We fixed this vulnerability on GitHub.com the same day and began rotating all potentially exposed credentials. Through this process we found some flaws in how we rotate certain credentials and are working on improving our credential rotation process. More detail can be found on our blog: https://github.blog/2024-01-16-rotating-credentials-for-github-com-and-new-ghes-patches/
Dec 29, 6:33 PM UTC
minor
With a mitigation deploying, we see recovery in most API requests and are continuing to monitor full rollout and mitigation.
Dec 29, 6:31 PM UTC
minor
Secret Scanning and potentially other APIs are returning 500 error responses. We're working on a mitigation.
Dec 29, 6:21 PM UTC
minor
We are investigating reports of degraded performance for API Requests
Dec 29, 6:17 PM UTC
good
On December 26, 2023, GitHub received a report through our Bug Bounty Program demonstrating a vulnerability which, if exploited, allowed access to credentials within a production container. We fixed this vulnerability on GitHub.com the same day and began rotating all potentially exposed credentials. Through this process we found some flaws in how we rotate certain credentials and are working on improving our credential rotation process. More detail can be found on our blog: https://github.blog/2024-01-16-rotating-credentials-for-github-com-and-new-ghes-patches/
Dec 29, 2:04 AM UTC
minor
Users without an existing valid session are unable to log in and will see an error page. We are working on a mitigation.
Dec 29, 1:41 AM UTC
minor
We are currently investigating this issue.
Dec 29, 1:41 AM UTC
good
On December 26, 2023, GitHub received a report through our Bug Bounty Program demonstrating a vulnerability which, if exploited, allowed access to credentials within a production container. We fixed this vulnerability on GitHub.com the same day and began rotating all potentially exposed credentials. Through this process we found some flaws in how we rotate certain credentials and are working on improving our credential rotation process. More detail can be found on our blog: https://github.blog/2024-01-16-rotating-credentials-for-github-com-and-new-ghes-patches/
Dec 28, 6:57 AM UTC
major
We have deployed a fix and email service should be restored shortly.
Dec 28, 6:49 AM UTC
major
We are experiencing issues sending some pull request, actions and other notification emails. Some emails may not be received as the result of activity on GitHub. Web and mobile push notifications are not affected.
Dec 28, 6:48 AM UTC
major
We are currently investigating this issue.
Dec 28, 6:43 AM UTC
good
On December 26, 2023, GitHub received a report through our Bug Bounty Program demonstrating a vulnerability which, if exploited, allowed access to credentials within a production container. We fixed this vulnerability on GitHub.com the same day and began rotating all potentially exposed credentials. Through this process we found some flaws in how we rotate certain credentials and are working on improving our credential rotation process. More detail can be found on our blog: https://github.blog/2024-01-16-rotating-credentials-for-github-com-and-new-ghes-patches/
Dec 27, 7:25 PM UTC
minor
A recent update to an Action that the GitHub Pages deployer service relies on impacted that service; the update has been corrected and redeployed.
Dec 27, 7:25 PM UTC
minor
We've identified the cause of some Pages errors and are deploying a mitigating fix now.
Dec 27, 7:12 PM UTC
minor
We continue to investigate issues with Pages, and will continue to keep users updated on progress towards mitigation.
Dec 27, 7:01 PM UTC
minor
Pages workflow builds which use the actions actions/upload-pages-artifact@v3 and actions/deploy-pages@v4 are currently failing.
Dec 27, 6:29 PM UTC
minor
We are investigating reports of degraded performance for Pages
Dec 27, 6:29 PM UTC
good
On December 26, 2023, GitHub received a report through our Bug Bounty Program demonstrating a vulnerability which, if exploited, allowed access to credentials within a production container. We fixed this vulnerability on GitHub.com the same day and began rotating all potentially exposed credentials. Through this process we found some flaws in how we rotate certain credentials and are working on improving our credential rotation process. More detail can be found on our blog: https://github.blog/2024-01-16-rotating-credentials-for-github-com-and-new-ghes-patches/
Dec 27, 4:00 AM UTC
major
We have applied a fix and most customers should now be seeing recovery.
Dec 27, 3:57 AM UTC
major
Customers using Codespaces will be unable to connect to existing codespaces, create new ones, or export. We have identified the issue and are working on a remediation.
Dec 27, 3:14 AM UTC
major
Codespaces is experiencing degraded availability. We are continuing to investigate.
Dec 27, 3:09 AM UTC
major
Codespace services are experiencing connection issues
Dec 27, 2:53 AM UTC
major
We are investigating reports of degraded availability for Codespaces
Dec 27, 2:53 AM UTC
good
On December 26, 2023, GitHub received a report through our Bug Bounty Program demonstrating a vulnerability which, if exploited, allowed access to credentials within a production container. We fixed this vulnerability on GitHub.com the same day and began rotating all potentially exposed credentials. Through this process we found some flaws in how we rotate certain credentials and are working on improving our credential rotation process. More detail can be found on our blog: https://github.blog/2024-01-16-rotating-credentials-for-github-com-and-new-ghes-patches/
Dec 27, 3:06 AM UTC
minor
Codespaces is operating normally.
Dec 27, 3:06 AM UTC
minor
We are investigating reports of degraded performance for Codespaces and API Requests
Dec 27, 2:51 AM UTC