US20190095253A1 - Cluster updating using temporary update-monitor pod - Google Patents
- Publication number
- US20190095253A1 (application US15/713,071; US201715713071A)
- Authority
- US
- United States
- Prior art keywords
- cluster
- application
- update
- initialization
- monitor
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F8/00—Arrangements for software engineering
- G06F8/60—Software deployment
- G06F8/65—Updates
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F8/00—Arrangements for software engineering
- G06F8/60—Software deployment
- G06F8/65—Updates
- G06F8/656—Updates while running
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F9/00—Arrangements for program control, e.g. control units
- G06F9/06—Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
- G06F9/46—Multiprogramming arrangements
- G06F9/50—Allocation of resources, e.g. of the central processing unit [CPU]
- G06F9/5061—Partitioning or combining of resources
- G06F9/5077—Logical partitioning of resources; Management or configuration of virtualized resources
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04L—TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
- H04L41/00—Arrangements for maintenance, administration or management of data switching networks, e.g. of packet switching networks
- H04L41/08—Configuration management of networks or network elements
- H04L41/0803—Configuration setting
- H04L41/0813—Configuration setting characterised by the conditions triggering a change of settings
- H04L41/082—Configuration setting characterised by the conditions triggering a change of settings the condition being updates or upgrades of network functionality
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04L—TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
- H04L41/00—Arrangements for maintenance, administration or management of data switching networks, e.g. of packet switching networks
- H04L41/08—Configuration management of networks or network elements
- H04L41/085—Retrieval of network configuration; Tracking network configuration history
- H04L41/0859—Retrieval of network configuration; Tracking network configuration history by keeping history of different configuration generations or by rolling back to previous configuration versions
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04L—TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
- H04L41/00—Arrangements for maintenance, administration or management of data switching networks, e.g. of packet switching networks
- H04L41/50—Network service management, e.g. ensuring proper service fulfilment according to agreements
- H04L41/5041—Network service management, e.g. ensuring proper service fulfilment according to agreements characterised by the time relationship between creation and deployment of a service
- H04L41/5054—Automatic deployment of services triggered by the service manager, e.g. service implementation by automatic configuration of network components
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04L—TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
- H04L41/00—Arrangements for maintenance, administration or management of data switching networks, e.g. of packet switching networks
- H04L41/50—Network service management, e.g. ensuring proper service fulfilment according to agreements
- H04L41/5058—Service discovery by the service manager
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04L—TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
- H04L43/00—Arrangements for monitoring or testing data switching networks
- H04L43/04—Processing captured monitoring data, e.g. for logfile generation
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04L—TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
- H04L43/00—Arrangements for monitoring or testing data switching networks
- H04L43/10—Active monitoring, e.g. heartbeat, ping or trace-route
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04L—TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
- H04L67/00—Network arrangements or protocols for supporting network services or applications
- H04L67/34—Network arrangements or protocols for supporting network services or applications involving the movement of software or configuration parameters
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04L—TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
- H04L41/00—Arrangements for maintenance, administration or management of data switching networks, e.g. of packet switching networks
- H04L41/08—Configuration management of networks or network elements
- H04L41/0876—Aspects of the degree of configuration automation
- H04L41/0886—Fully automatic configuration
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04L—TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
- H04L41/00—Arrangements for maintenance, administration or management of data switching networks, e.g. of packet switching networks
- H04L41/50—Network service management, e.g. ensuring proper service fulfilment according to agreements
- H04L41/5003—Managing SLA; Interaction between SLA and QoS
- H04L41/5009—Determining service level performance parameters or violations of service level contracts, e.g. violations of agreed response time or mean time between failures [MTBF]
Abstract
Description
- A “distributed application” is an application for which a processing workload is shared by plural instances of the application executing in parallel. The instances can include instances of different software application programs and/or multiple instances of a given software application program. The instances can run on different hardware, so that if one instance fails due to a fault in the underlying hardware, other instances can assume its portion of the workload. The instances may operate independently of each other, or they may cooperate with each other. In the latter case, the instances may be required to communicate with each other, e.g., to synchronize data they are collecting and/or generating.
- The application instances can be arranged in pods of a Kubernetes cluster, wherein each pod can hold one or more application instances. Application instances in a Kubernetes cluster can easily communicate with other application instances in the cluster, e.g., to synchronize data, while communication with entities outside the cluster can be controlled, e.g., for security purposes.
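As a non-authoritative sketch of that intra-cluster reachability, the following assumes the official Kubernetes Python client running inside a pod, a default namespace, and a hypothetical app=distributed-app label; none of these specifics come from this disclosure.

```python
# Sketch only: discovering peer application pods from inside a Kubernetes
# cluster. Assumes the "kubernetes" Python client and a service account
# permitted to list pods; namespace and label selector are hypothetical.
from kubernetes import client, config

config.load_incluster_config()  # use the pod's own service-account credentials
v1 = client.CoreV1Api()

pods = v1.list_namespaced_pod(
    namespace="default",                   # hypothetical namespace
    label_selector="app=distributed-app",  # hypothetical label
)
for pod in pods.items:
    # Pod IPs are routable inside the cluster, which is what lets
    # instances synchronize data with each other directly, while access
    # from outside stays gated by the API server.
    print(pod.metadata.name, pod.status.pod_ip)
```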
- FIG. 1 is a schematic diagram of a computing environment including a distributed application to be updated.
- FIG. 2 is a schematic diagram of an application stack applicable to application instances represented in FIG. 1.
- FIG. 3 is a flow chart of an update process for a cluster of distributed application instances.
- The present invention provides for using a temporary update-monitor pod in a cluster to detect completion of post-launch initialization of an update to a distributed application executing in the cluster. Herein, a pod is a unit that can be controlled by a cluster and that can host one or more application instances. The cluster can, for example, be a Kubernetes cluster. Herein, “update” encompasses replacing an application instance with another application instance, e.g., to upgrade or to downgrade an application.
- Updating a distributed application can involve updating each instance of one or more of its application programs. Fewer than all instances may be updated; for example, it may be that only one tier of a multi-tier distributed application is to be updated. In some cases, once all updated instances are launched and executing, a production workload can be switched from the original (pre-update) versions to the updated versions. In other cases, some post-launch initialization is required before the workload can be switched. For example, it may be necessary to transfer data generated and collected by the original instances to the updated instances before the updated instances can assume the workload.
- In the course of the present invention, a problem was recognized in updating a distributed application requiring a post-launch initialization in a Kubernetes context. While a Kubernetes cluster is designed to detect and report when a launched application is ready to run, a Kubernetes cluster may not be designed to detect and report completion of a post-launch initialization. This presents a challenge to an external (to the cluster) entity managing the update and wanting to switch the workload once post-launch initialization is complete.
- While it is possible to allow an external script access to cluster pods and/or the application instances they host, such an arrangement could introduce unacceptable security issues. To avoid these issues, the present invention creates a temporary pod within the cluster that can communicate with the pods hosting application instances to determine the status of post-launch initialization. The temporary pod's existence can be limited to a number of seconds, after which the resources it consumed are freed for other uses and any security issues it raised are eliminated.
- In accordance with the present invention, an external script or other entity can use a Kubernetes API to command a Kubernetes manager to create a temporary update-monitoring pod, e.g., one hosting a lightweight HTTP (Hypertext Transfer Protocol) client driven via a command-line interface (CLI), to communicate with the other pods (e.g., those holding the application instances to be updated). The Kubernetes manager is not in the cluster, but can communicate with each of the pods in the cluster. The Kubernetes manager can be accessed via its API without exposing the application instances in the cluster to additional security risks.
- The temporary pod can send HTTP requests to initiate post-launch processing, monitor the status of initialization, and report the returned results in a log, which can be accessed by the external entity. Once the log has been read by the external entity, the temporary pod can be destroyed, freeing resources and eliminating it as a possible security vulnerability. Note that while the foregoing describes the invention in terms of Kubernetes clusters and pods, the invention can apply to clusters other than Kubernetes clusters and to application-instance hosts other than pods.
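By way of a hedged illustration of the flow just described, the sketch below uses the official Kubernetes Python client in place of the CLI access described above; the pod name, container image, peer host, and status endpoint are all assumptions, not part of this disclosure.

```python
# Sketch: an external script asks the Kubernetes manager (API server) to
# create a short-lived update-monitor pod. Pod and image names, the peer
# host, and the /init-status endpoint are hypothetical.
from kubernetes import client, config

config.load_kube_config()  # the external entity authenticates via kubeconfig
v1 = client.CoreV1Api()

monitor_pod = client.V1Pod(
    metadata=client.V1ObjectMeta(
        name="update-monitor", labels={"role": "update-monitor"}),
    spec=client.V1PodSpec(
        restart_policy="Never",  # run once, then terminate
        containers=[client.V1Container(
            name="monitor",
            image="curlimages/curl",  # any image with a lightweight HTTP CLI client
            # Poll a (hypothetical) status endpoint on a peer pod; whatever
            # is printed becomes the pod log that the external script reads.
            command=["sh", "-c",
                     "curl -s http://app-pod-updated:8080/init-status || echo FAILED"],
        )],
    ),
)
v1.create_namespaced_pod(namespace="default", body=monitor_pod)
```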
- As shown in FIG. 1, a computing environment 100 includes a Kubernetes cluster 102 and an updater device 104 for managing cluster 102 via a cluster manager 110. Kubernetes is an open-source system for automating the deployment, scaling, and management of containerized applications; its pods can host process containers, e.g., Docker containers (promoted by Docker, Inc.). Cluster 102 runs on a multi-machine computer system 106 including hardware 108, e.g., processors, communications devices, and non-transitory media. The non-transitory media is encoded with code that, when executed by the processors, defines cluster 102.
- Cluster 102 contains instances 120 of an original version of a distributed application, in this case, a cloud-management application that is itself hosted in a public cloud. Instances 120 are running on respective pods 142 and 144. More generally, there can be any number of original distributed application instances running on pods. Pods 142 and 144 are running “original” instances in that they are to be updated, in other words, replaced by updated versions of the distributed application.
- Original distributed application instances 120 handle respective streams of a production workload 126. In the course of handling this workload, original distributed application instances 120 produce and update respective sets of app data 130 and 132. Each instance includes a respective data synchronizer so that the application instances will have the same set of data; that way, if one of the instances fails, little or no data will be lost, as the other instance will have a complete set.
- Applications are arranged in pods within a cluster. In the illustrated case, there is a cluster-manager 110 (that runs on a “master” node separate from the “minion” nodes on which the pods run) for managing cluster 102, and minion nodes for containing applications and their containers. Thus, application instance 122 is in pod 142 and application instance 124 is in pod 144. More generally, each pod can hold one or more containers, each of which typically is running an instance of a specific version of an application. Thus, as represented in FIG. 2, each application stack 200 would include hardware 202, a hypervisor 204 for supporting virtual machines, a virtual machine 206, a guest operating system 208, a pod 210, e.g., Docker, a container 212, and an application 214.
- Correspondingly, application instance 122, FIG. 1, is contained in a process container 146 within pod 142, while application instance 124 is contained in a process container 148 within pod 144. Application instances 122 and 124 include respective data synchronizers 150 and 152 for synchronizing application data 130 with application data 132. Application instances 122 and 124 are running the same version (e.g., based on the same Docker image) of the original distributed application 120.
- Updater device 104, FIG. 1, can be any form of computer, e.g., a laptop, a tablet, or a smartphone. Updater device 104 includes a script 160 designed to, when executed by updater device 104, cause cluster 102 to be updated. More specifically, update script 160, when executed, can send commands to cluster-manager 110 to cause a new (and separate) version of the distributed application instances 170 to be created, and to initiate migration of all application data associated with instances 120.
- During an update, original application instance 122 is updated to and replaced by updated application instance 172. Likewise, original application instance 124 is updated to and replaced by an updated application instance 174. More generally, there does not have to be a 1:1 relationship between original instances and updated instances. For example, original instance 122 could be updated and the update cloned to yield plural updated instances. After the update, the number of instances could increase, decrease, or stay the same. However, since different original instances may be running on different systems, it may be most efficient to update each node by replacing the original instance it was running with an updated instance.
- In the illustrated example, updated distributed application instances 170 include application instances 172 and 174. Updated application instance 172 resides in a process container 176 of a pod 178, while updated application instance 174 resides in a process container 180 of a pod 182. Updated application instance 172 includes a data synchronizer 184, and updated application instance 174 includes a data synchronizer 186. However, upon launch of application instances 172 and 174, the respective application data sets 188 and 190 are initially empty.
- In the illustrated embodiment, a completely new and separate version of an application replaces its predecessor. In some alternative embodiments, updating the application instances can involve cloning their container images, applying updates to the clones, and then launching the updated clones. In many cases, once the updated application instances are all launched, production can be switched from the original instances to the updated instances.
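A minimal sketch of the clone-and-update alternative just described, assuming the Docker SDK for Python as the tooling (the disclosure names no specific tool) and hypothetical container and tag names:

```python
# Sketch: cloning an original instance's container into a new image that
# an update can then be layered onto. All names are hypothetical.
import docker  # Docker SDK for Python

d = docker.from_env()
original = d.containers.get("app-original")               # running original instance
clone = original.commit(repository="app", tag="v2-base")  # snapshot -> image "app:v2-base"
print("cloned image:", clone.id)

# Applying the update would typically be a build layered on the clone,
# e.g., a Dockerfile starting "FROM app:v2-base"; the updated image is
# then launched in new pods alongside the original instances.
```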
- However, upon launch, updated distributed application instances 172 and 174 lack the application data collected and generated by the original distributed application instances 122 and 124. For this reason, the updated application instances 172 and 174 are not ready to handle production workloads upon launch. Therefore, the production workload cannot be switched in response to an indication that the updated distributed application instances 172 and 174 have been launched.
- Instead, original distributed application instances 120 continue to handle production workloads while application data is transferred to the updated instances. For example, application instances 122 and 124 can synchronize with each other and with the updated application instances until all four (original and updated) app data instances are synchronized. At that point, the production workload can be switched to the updated instances. In an alternative embodiment, the app data from the original distributed application instances can be checkpointed, e.g., their respective app data can be transferred to the respective updated application instances. In such a case, production workloads can be switched to the updated application instances on a staggered basis.
- Once the data has been transferred, the updated application instances are ready to handle production workloads. However, cluster 102 prevents entities outside of the cluster (e.g., update script 160) from querying pods directly to determine when the data transfer is complete. It would be possible to provide external access to the pods, but this external access might pose a security risk.
- So, instead, update script 160 authenticates itself to cluster-manager 110 and commands it, via an API of the cluster manager, to create a temporary update-monitor pod 192 within cluster 102. The temporary update-monitor pod can include a lightweight HTTP client 194 that uses a command-line interface (CLI) to communicate with other pods of cluster 102. As this pod 192 is internal to cluster 102, it can locate and communicate with other pods in the cluster. It can thus poll the original and updated application instances at a specified time to determine if the application data transfer is complete. The results of the polling can be entered into an update status log 196, which can be read by external script 160. If update status log 196 indicates that the transfer has been successfully completed, then production transactions can be switched to the updated instances. The script 160 can also command cluster manager 110 to destroy the original distributed application instances and the temporary update-monitor pod to free up resources and remove any security risk that the pod might represent if allowed to persist.
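What runs inside temporary pod 192 can be very small. The following stdlib-only Python sketch assumes hypothetical peer pod host names and a hypothetical /init-status endpoint on each application instance; whatever it prints to stdout is captured by Kubernetes as the pod log, i.e., update status log 196.

```python
# Sketch of the monitor pod's work: poll peer pods for post-launch
# initialization status and emit results to stdout (the pod log).
# Host names, port, endpoint, and JSON shape are hypothetical.
import json
import time
import urllib.request

PEERS = ["app-pod-original", "app-pod-updated"]  # hypothetical pod DNS names

def poll(host: str) -> str:
    try:
        with urllib.request.urlopen(f"http://{host}:8080/init-status", timeout=5) as r:
            return json.load(r).get("state", "unknown")  # e.g. "transferring" | "complete"
    except OSError:
        return "timeout"  # unanswered query counts as a time-out

for attempt in range(30):  # bounded lifetime: at most ~30 seconds of polling
    states = {host: poll(host) for host in PEERS}
    print(json.dumps({"attempt": attempt, "states": states}), flush=True)
    if all(s == "complete" for s in states.values()):
        break
    time.sleep(1)
```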
- A
process 300 for updating a distributed application in a cluster is flow charted inFIG. 3 . At 301, an update script is launched outside the cluster. At 302, the script causes updated instances of the distributed application to be launched. In some cases, preparation for the launch can included cloning container images for the original instances, applying updates to the cloned images, and launching the updated cloned images. Once the launches have completed and the updated instances are up and running, e.g., in new pods within the cluster, the cluster can report, at 303, to the script that the new version has been successfully created and is running. At this point, post-launch initialization, e.g., data transfer from the original instances to the updated instances, can begin. - At 305, the script commands a cluster manager to create a temporary update-monitor pod. The temporary update-monitor pod includes a lightweight web (http) client that sends, at 306, http commands to the other pods in the cluster to initiate and monitor the status of the data transfer. In other words, the temporary update-monitor pod polls the other pods in the cluster. The HTTP responses are written to a log file at 307. The script waits for the update-monitor pod to complete and then reads the log at 308. In this way, the script can obtain information regarding the data transfer via the temporary update-monitor pod that it could not obtain directly on its own.
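On the script side, steps 305 through 308 might look like the sketch below, again assuming the Kubernetes Python client and a hypothetical success marker in the log; it pairs with the monitor-pod sketch above.

```python
# Sketch: the external script waits for the run-once monitor pod to finish,
# reads its log (update status log 196), and chooses step 309 or 311.
import time
from kubernetes import client, config

config.load_kube_config()
v1 = client.CoreV1Api()

NAME, NS = "update-monitor", "default"  # hypothetical pod name and namespace

# Step 308: wait for the run-once monitor pod to terminate.
while v1.read_namespaced_pod(NAME, NS).status.phase not in ("Succeeded", "Failed"):
    time.sleep(2)

log = v1.read_namespaced_pod_log(NAME, NS)
if '"complete"' in log:  # hypothetical success marker emitted by the monitor
    print("data transfer complete: switch production workload (step 309)")
else:
    print("transfer failed or timed out: recreate the monitor pod (step 311)")

# Either way the temporary pod is destroyed so that it cannot persist.
v1.delete_namespaced_pod(NAME, NS)
```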
- In the event that the log file indicates that the data transfer is complete and that the update operation is a success, then, at 309, the script causes the production workload to be switched from the original distributed application instances to the updated distributed application instances. Once the switch-over is complete, the script commands, at 310, the cluster manager to destroy the original pods (containing the original application instances) and the temporary update-monitor pod. This marks an end to update process 300.
- In the event that the log indicates that the update failed, e.g., because some pod indicates that it cannot complete the transfer, or in the event a time-out occurs in that a poll by the temporary pod goes unanswered, then, at 311, the script commands the cluster manager to destroy the temporary update-monitor pod and the resources associated with the new (failed) application version so that they do not persist. Process 300 then can return to 305 to create a new temporary update-monitor pod. In this way, each temporary update-monitor pod is too transitory to expose a security risk; for example, the timing can be such that each temporary update-monitor pod persists for less than a minute.
- All art labeled “prior art”, if any, is admitted prior art; all art not labeled “prior art” is not admitted prior art. The illustrated embodiments, as well as variations thereon and modifications thereto, are provided for by the present invention, the scope of which is defined by the following claims.
Claims (10)
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US15/713,071 US10705880B2 (en) | 2017-09-22 | 2017-09-22 | Cluster updating using temporary update-monitor pod |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US15/713,071 US10705880B2 (en) | 2017-09-22 | 2017-09-22 | Cluster updating using temporary update-monitor pod |
Publications (2)
Publication Number | Publication Date |
---|---|
US20190095253A1 true US20190095253A1 (en) | 2019-03-28 |
US10705880B2 US10705880B2 (en) | 2020-07-07 |
Family
ID=65806600
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US15/713,071 Active 2038-02-16 US10705880B2 (en) | 2017-09-22 | 2017-09-22 | Cluster updating using temporary update-monitor pod |
Country Status (1)
Country | Link |
---|---|
US (1) | US10705880B2 (en) |
Cited By (30)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20190190771A1 (en) * | 2017-12-20 | 2019-06-20 | Gemini Open Cloud Computing Inc. | Cloud service management method |
CN110262899A (en) * | 2019-06-20 | 2019-09-20 | 无锡华云数据技术服务有限公司 | Monitor component elastic telescopic method, apparatus and controlled terminal based on Kubernetes cluster |
CN110311817A (en) * | 2019-06-28 | 2019-10-08 | 四川长虹电器股份有限公司 | Container log processing system for Kubernetes cluster |
CN110347443A (en) * | 2019-06-28 | 2019-10-18 | 联想(北京)有限公司 | Log processing method and log processing device |
CN110557428A (en) * | 2019-07-17 | 2019-12-10 | 中国科学院计算技术研究所 | script interpretation type service agent method and system based on Kubernetes |
CN110704376A (en) * | 2019-09-04 | 2020-01-17 | 广东浪潮大数据研究有限公司 | Log file saving method and device |
CN110851236A (en) * | 2019-11-11 | 2020-02-28 | 星环信息科技(上海)有限公司 | Real-time resource scheduling method and device, computer equipment and storage medium |
CN110995871A (en) * | 2019-12-24 | 2020-04-10 | 浪潮云信息技术有限公司 | Method for realizing high availability of KV storage service |
CN111610985A (en) * | 2020-05-13 | 2020-09-01 | 麒麟软件有限公司 | Kubernet cluster rapid deployment method on domestic platform |
CN111897558A (en) * | 2020-07-23 | 2020-11-06 | 北京三快在线科技有限公司 | Container cluster management system Kubernetes upgrade method and device |
WO2021017301A1 (en) * | 2019-07-30 | 2021-02-04 | 平安科技(深圳)有限公司 | Management method and apparatus based on kubernetes cluster, and computer-readable storage medium |
CN112506617A (en) * | 2020-12-16 | 2021-03-16 | 新浪网技术(中国)有限公司 | Mirror image updating method and device for sidecar container in Kubernetes cluster |
US11086616B2 (en) * | 2018-09-25 | 2021-08-10 | Vmware, Inc. | Near zero downtime application upgrade |
CN113364640A (en) * | 2020-03-04 | 2021-09-07 | 大唐移动通信设备有限公司 | Visualization method and device for operation index |
CN113422700A (en) * | 2021-06-22 | 2021-09-21 | 汇付天下有限公司 | Non-inductive upgrading method and non-inductive upgrading device |
US11176245B2 (en) | 2019-09-30 | 2021-11-16 | International Business Machines Corporation | Protecting workloads in Kubernetes |
WO2022007645A1 (en) * | 2020-07-10 | 2022-01-13 | 华为技术有限公司 | Method and apparatus for creating pod |
US11226845B2 (en) * | 2020-02-13 | 2022-01-18 | International Business Machines Corporation | Enhanced healing and scalability of cloud environment app instances through continuous instance regeneration |
US20220092190A1 (en) * | 2020-09-18 | 2022-03-24 | Checkpoint Software Technologies Ltd. | System and method for performing automated security reviews |
CN114884838A (en) * | 2022-05-20 | 2022-08-09 | 远景智能国际私人投资有限公司 | Monitoring method of Kubernetes component and server |
US11451430B2 (en) * | 2018-06-06 | 2022-09-20 | Huawei Cloud Computing Technologies Co., Ltd. | System and method to schedule management operations and shared memory space for multi-tenant cache service in cloud |
US11513842B2 (en) | 2019-10-03 | 2022-11-29 | International Business Machines Corporation | Performance biased resource scheduling based on runtime performance |
US11558253B2 (en) * | 2018-09-12 | 2023-01-17 | Huawei Technologies Co., Ltd. | Data processing method and apparatus, and computing node for updating container images |
CN115643112A (en) * | 2022-12-22 | 2023-01-24 | 杭州默安科技有限公司 | Method and device for testing safety protection capability |
US11586455B2 (en) * | 2019-02-21 | 2023-02-21 | Red Hat, Inc. | Managing containers across multiple operating systems |
US11816469B2 (en) | 2021-09-22 | 2023-11-14 | International Business Machines Corporation | Resolving the version mismatch problem when implementing a rolling update in an open-source platform for container orchestration |
US20240012717A1 (en) * | 2022-07-11 | 2024-01-11 | Commvault Systems, Inc. | Protecting configuration data in a clustered container system |
CN117407125A (en) * | 2023-12-14 | 2024-01-16 | 中电云计算技术有限公司 | Pod high availability implementation method, device, equipment and readable storage medium |
US20240103921A1 (en) * | 2022-09-22 | 2024-03-28 | Honeywell International Inc. | Systems and methods for secured and integrated analytics deployment accelerator |
US12032855B2 (en) | 2021-08-06 | 2024-07-09 | Commvault Systems, Inc. | Using an application orchestrator computing environment for automatically scaled deployment of data protection resources needed for data in a production cluster distinct from the application orchestrator or in another application orchestrator computing environment |
Families Citing this family (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US11750475B1 (en) * | 2019-01-15 | 2023-09-05 | Amazon Technologies, Inc. | Monitoring customer application status in a provider network |
US11106548B2 (en) * | 2019-10-15 | 2021-08-31 | EMC IP Holding Company LLC | Dynamic application consistent data restoration |
Citations (13)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20050177827A1 (en) * | 2000-03-24 | 2005-08-11 | Fong Kester L. | Method of administering software components using asynchronous messaging in a multi-platform, multi-programming language environment |
US9112769B1 (en) * | 2010-12-27 | 2015-08-18 | Amazon Technologies, Inc. | Programatically provisioning virtual networks |
US20160043892A1 (en) * | 2014-07-22 | 2016-02-11 | Intigua, Inc. | System and method for cloud based provisioning, configuring, and operating management tools |
US20160366233A1 (en) * | 2015-06-10 | 2016-12-15 | Platform9, Inc. | Private Cloud as a service |
US20170111241A1 (en) * | 2015-10-19 | 2017-04-20 | Draios Inc. | Automated service-oriented performance management |
US9760529B1 (en) * | 2014-09-17 | 2017-09-12 | Amazon Technologies, Inc. | Distributed state manager bootstrapping |
US20170262221A1 (en) * | 2016-03-11 | 2017-09-14 | EMC IP Holding Company LLC | Methods and apparatuses for data migration of a storage device |
US9928059B1 (en) * | 2014-12-19 | 2018-03-27 | Amazon Technologies, Inc. | Automated deployment of a multi-version application in a network-based computing environment |
US20180109387A1 (en) * | 2016-10-18 | 2018-04-19 | Red Hat, Inc. | Continued verification and monitor of application code in containerized execution environment |
US20180152534A1 (en) * | 2015-06-03 | 2018-05-31 | Telefonaktiebolaget Lm Ericsson (Publ) | Implanted agent within a first service container for enabling a reverse proxy on a second container |
US20180288129A1 (en) * | 2017-03-29 | 2018-10-04 | Ca, Inc. | Introspection driven monitoring of multi-container applications |
US20180359338A1 (en) * | 2017-06-09 | 2018-12-13 | Red Hat, Inc. | Data driven bin packing implementation for data centers with variable node capabilities |
US20190095293A1 (en) * | 2016-07-27 | 2019-03-28 | Tencent Technology (Shenzhen) Company Limited | Data disaster recovery method, device and system |
- 2017-09-22: US application 15/713,071 filed; granted as US10705880B2 (status: active)
Patent Citations (13)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20050177827A1 (en) * | 2000-03-24 | 2005-08-11 | Fong Kester L. | Method of administering software components using asynchronous messaging in a multi-platform, multi-programming language environment |
US9112769B1 (en) * | 2010-12-27 | 2015-08-18 | Amazon Technologies, Inc. | Programatically provisioning virtual networks |
US20160043892A1 (en) * | 2014-07-22 | 2016-02-11 | Intigua, Inc. | System and method for cloud based provisioning, configuring, and operating management tools |
US9760529B1 (en) * | 2014-09-17 | 2017-09-12 | Amazon Technologies, Inc. | Distributed state manager bootstrapping |
US9928059B1 (en) * | 2014-12-19 | 2018-03-27 | Amazon Technologies, Inc. | Automated deployment of a multi-version application in a network-based computing environment |
US20180152534A1 (en) * | 2015-06-03 | 2018-05-31 | Telefonaktiebolaget Lm Ericsson (Publ) | Implanted agent within a first service container for enabling a reverse proxy on a second container |
US20160366233A1 (en) * | 2015-06-10 | 2016-12-15 | Platform9, Inc. | Private Cloud as a service |
US20170111241A1 (en) * | 2015-10-19 | 2017-04-20 | Draios Inc. | Automated service-oriented performance management |
US20170262221A1 (en) * | 2016-03-11 | 2017-09-14 | EMC IP Holding Company LLC | Methods and apparatuses for data migration of a storage device |
US20190095293A1 (en) * | 2016-07-27 | 2019-03-28 | Tencent Technology (Shenzhen) Company Limited | Data disaster recovery method, device and system |
US20180109387A1 (en) * | 2016-10-18 | 2018-04-19 | Red Hat, Inc. | Continued verification and monitor of application code in containerized execution environment |
US20180288129A1 (en) * | 2017-03-29 | 2018-10-04 | Ca, Inc. | Introspection driven monitoring of multi-container applications |
US20180359338A1 (en) * | 2017-06-09 | 2018-12-13 | Red Hat, Inc. | Data driven bin packing implementation for data centers with variable node capabilities |
Cited By (34)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20190190771A1 (en) * | 2017-12-20 | 2019-06-20 | Gemini Open Cloud Computing Inc. | Cloud service management method |
US11451430B2 (en) * | 2018-06-06 | 2022-09-20 | Huawei Cloud Computing Technologies Co., Ltd. | System and method to schedule management operations and shared memory space for multi-tenant cache service in cloud |
US11558253B2 (en) * | 2018-09-12 | 2023-01-17 | Huawei Technologies Co., Ltd. | Data processing method and apparatus, and computing node for updating container images |
US11086616B2 (en) * | 2018-09-25 | 2021-08-10 | Vmware, Inc. | Near zero downtime application upgrade |
US11586455B2 (en) * | 2019-02-21 | 2023-02-21 | Red Hat, Inc. | Managing containers across multiple operating systems |
CN110262899B (en) * | 2019-06-20 | 2021-05-11 | 华云数据控股集团有限公司 | Monitoring component elastic expansion method and device based on Kubernetes cluster and controlled terminal |
CN110262899A (en) * | 2019-06-20 | 2019-09-20 | 无锡华云数据技术服务有限公司 | Monitor component elastic telescopic method, apparatus and controlled terminal based on Kubernetes cluster |
CN110347443A (en) * | 2019-06-28 | 2019-10-18 | 联想(北京)有限公司 | Log processing method and log processing device |
CN110311817A (en) * | 2019-06-28 | 2019-10-08 | 四川长虹电器股份有限公司 | Container log processing system for Kubernetes cluster |
CN110557428A (en) * | 2019-07-17 | 2019-12-10 | 中国科学院计算技术研究所 | script interpretation type service agent method and system based on Kubernetes |
WO2021017301A1 (en) * | 2019-07-30 | 2021-02-04 | 平安科技(深圳)有限公司 | Management method and apparatus based on kubernetes cluster, and computer-readable storage medium |
CN110704376A (en) * | 2019-09-04 | 2020-01-17 | 广东浪潮大数据研究有限公司 | Log file saving method and device |
US11176245B2 (en) | 2019-09-30 | 2021-11-16 | International Business Machines Corporation | Protecting workloads in Kubernetes |
US11513842B2 (en) | 2019-10-03 | 2022-11-29 | International Business Machines Corporation | Performance biased resource scheduling based on runtime performance |
CN110851236A (en) * | 2019-11-11 | 2020-02-28 | 星环信息科技(上海)有限公司 | Real-time resource scheduling method and device, computer equipment and storage medium |
WO2021093783A1 (en) * | 2019-11-11 | 2021-05-20 | 星环信息科技(上海)股份有限公司 | Real-time resource scheduling method and apparatus, computer device, and storage medium |
CN110995871A (en) * | 2019-12-24 | 2020-04-10 | 浪潮云信息技术有限公司 | Method for realizing high availability of KV storage service |
US11226845B2 (en) * | 2020-02-13 | 2022-01-18 | International Business Machines Corporation | Enhanced healing and scalability of cloud environment app instances through continuous instance regeneration |
CN113364640A (en) * | 2020-03-04 | 2021-09-07 | 大唐移动通信设备有限公司 | Visualization method and device for operation index |
CN111610985A (en) * | 2020-05-13 | 2020-09-01 | 麒麟软件有限公司 | Kubernet cluster rapid deployment method on domestic platform |
WO2022007645A1 (en) * | 2020-07-10 | 2022-01-13 | 华为技术有限公司 | Method and apparatus for creating pod |
CN111897558A (en) * | 2020-07-23 | 2020-11-06 | 北京三快在线科技有限公司 | Container cluster management system Kubernetes upgrade method and device |
US11797685B2 (en) * | 2020-09-18 | 2023-10-24 | Check Point Software Technologies Ltd. | System and method for performing automated security reviews |
US20220092190A1 (en) * | 2020-09-18 | 2022-03-24 | Checkpoint Software Technologies Ltd. | System and method for performing automated security reviews |
CN112506617A (en) * | 2020-12-16 | 2021-03-16 | 新浪网技术(中国)有限公司 | Mirror image updating method and device for sidecar container in Kubernetes cluster |
CN113422700A (en) * | 2021-06-22 | 2021-09-21 | 汇付天下有限公司 | Non-inductive upgrading method and non-inductive upgrading device |
US12032855B2 (en) | 2021-08-06 | 2024-07-09 | Commvault Systems, Inc. | Using an application orchestrator computing environment for automatically scaled deployment of data protection resources needed for data in a production cluster distinct from the application orchestrator or in another application orchestrator computing environment |
US11816469B2 (en) | 2021-09-22 | 2023-11-14 | International Business Machines Corporation | Resolving the version mismatch problem when implementing a rolling update in an open-source platform for container orchestration |
CN114884838A (en) * | 2022-05-20 | 2022-08-09 | 远景智能国际私人投资有限公司 | Monitoring method of Kubernetes component and server |
US20240012717A1 (en) * | 2022-07-11 | 2024-01-11 | Commvault Systems, Inc. | Protecting configuration data in a clustered container system |
US12135618B2 (en) * | 2022-07-11 | 2024-11-05 | Commvault Systems, Inc. | Protecting configuration data in a clustered container system |
US20240103921A1 (en) * | 2022-09-22 | 2024-03-28 | Honeywell International Inc. | Systems and methods for secured and integrated analytics deployment accelerator |
CN115643112A (en) * | 2022-12-22 | 2023-01-24 | 杭州默安科技有限公司 | Method and device for testing safety protection capability |
CN117407125A (en) * | 2023-12-14 | 2024-01-16 | 中电云计算技术有限公司 | Pod high availability implementation method, device, equipment and readable storage medium |
Also Published As
Publication number | Publication date |
---|---|
US10705880B2 (en) | 2020-07-07 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
US10705880B2 (en) | Cluster updating using temporary update-monitor pod | |
US11288130B2 (en) | Container-based application data protection method and system | |
US11416342B2 (en) | Automatically configuring boot sequence of container systems for disaster recovery | |
US10908999B2 (en) | Network block device based continuous replication for Kubernetes container management systems | |
US11372729B2 (en) | In-place cloud instance restore | |
US10230786B2 (en) | Hot deployment in a distributed cluster system | |
EP3183649B1 (en) | Application publishing using memory state sharing | |
US10713183B2 (en) | Virtual machine backup using snapshots and current configuration | |
US9386079B2 (en) | Method and system of virtual desktop infrastructure deployment studio | |
EP3588296A1 (en) | Dynamically scaled hyperconverged system | |
US8738883B2 (en) | Snapshot creation from block lists | |
US11809901B2 (en) | Migrating the runtime state of a container between two nodes | |
CN111989681A (en) | Automatically deployed Information Technology (IT) system and method | |
US20150244802A1 (en) | Importing and exporting virtual disk images | |
EP3767471A1 (en) | Provisioning and managing replicated data instances | |
US20110314465A1 (en) | Method and system for workload distributing and processing across a network of replicated virtual machines | |
US12267253B2 (en) | Data plane techniques for substrate managed containers | |
US20120311377A1 (en) | Replaying jobs at a secondary location of a service | |
US10860364B2 (en) | Containerized management services with high availability | |
US20200364063A1 (en) | Distributed job manager for stateful microservices | |
CN106354563A (en) | Distributed computing system for 3D (three-dimensional reconstruction) and 3D reconstruction method | |
US9430265B1 (en) | System and method for handling I/O timeout deadlines in virtualized systems | |
Carson et al. | Mandrake: Implementing durability for edge clouds | |
US10848405B2 (en) | Reporting progress of operation executing on unreachable host | |
CN116112497B (en) | Node scheduling method, device, equipment and medium of cloud host cluster |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
AS | Assignment |
Owner name: VMWARE, INC., CALIFORNIA Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNORS:CURTIS, TYLER;VIJAYAKUMAR, KARTHIGEYAN;REEL/FRAME:043667/0379 Effective date: 20170918 |
|
FEPP | Fee payment procedure |
Free format text: ENTITY STATUS SET TO UNDISCOUNTED (ORIGINAL EVENT CODE: BIG.); ENTITY STATUS OF PATENT OWNER: LARGE ENTITY |
|
STPP | Information on status: patent application and granting procedure in general |
Free format text: DOCKETED NEW CASE - READY FOR EXAMINATION |
|
STPP | Information on status: patent application and granting procedure in general |
Free format text: NON FINAL ACTION MAILED |
|
STPP | Information on status: patent application and granting procedure in general |
Free format text: RESPONSE TO NON-FINAL OFFICE ACTION ENTERED AND FORWARDED TO EXAMINER |
|
STPP | Information on status: patent application and granting procedure in general |
Free format text: FINAL REJECTION MAILED |
|
STPP | Information on status: patent application and granting procedure in general |
Free format text: DOCKETED NEW CASE - READY FOR EXAMINATION |
|
STPP | Information on status: patent application and granting procedure in general |
Free format text: NOTICE OF ALLOWANCE MAILED -- APPLICATION RECEIVED IN OFFICE OF PUBLICATIONS |
|
STPP | Information on status: patent application and granting procedure in general |
Free format text: PUBLICATIONS -- ISSUE FEE PAYMENT VERIFIED |
|
STCF | Information on status: patent grant |
Free format text: PATENTED CASE |
|
MAFP | Maintenance fee payment |
Free format text: PAYMENT OF MAINTENANCE FEE, 4TH YEAR, LARGE ENTITY (ORIGINAL EVENT CODE: M1551); ENTITY STATUS OF PATENT OWNER: LARGE ENTITY Year of fee payment: 4 |
|
AS | Assignment |
Owner name: VMWARE LLC, CALIFORNIA Free format text: CHANGE OF NAME;ASSIGNOR:VMWARE, INC.;REEL/FRAME:067102/0395 Effective date: 20231121 |