Posts

Showing posts from December, 2020

Cron job vs Batch job

 https://www.quora.com/What-is-the-difference-between-cron-job-and-batch-job#:~:text=Cron%20is%20a%20task%20scheduler,a%20specified%20time%20or%20frequency.&text=A%20cron%20job%20runs%20regularly,batch%20job%20that%20runs%20regularly.
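The gist of the linked answer: cron is a task scheduler, a cron job runs regularly at a specified time or frequency, and a batch job is a unit of work that can be run once or scheduled; cron is often used to run a batch job regularly. A minimal crontab entry, with placeholder paths, might look like:

```shell
# Run a nightly batch job at 02:30 every day (script and log paths are placeholders)
30 2 * * * /opt/jobs/nightly_batch.sh >> /var/log/nightly_batch.log 2>&1
```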

Google Cloud CDN vs Google Cloud Storage

 https://stackshare.io/stackups/google-cloud-cdn-vs-google-cloud-storage#:~:text=Cloud%20CDN%20lowers%20network%20latency,highly%20available%20object%20storage%20service%22.

Cloud Composer

 Cloud Composer is a managed workflow orchestration service built on Apache Airflow. Use it to author, schedule, and monitor pipelines whose processing steps depend on each other. (Google doc)

Shared VPC overview

Shared VPC overview 

Google Cloud Platform Marketplace

 Google Cloud Platform Marketplace lets you quickly deploy functional software packages that run on Google Cloud Platform. Even if you are unfamiliar with services like Compute Engine or Cloud Storage, you can easily start up a familiar software package without having to manually configure the software, virtual machine instances, storage, or network settings. Deploy a software package now and scale that deployment later when your application requires additional capacity. Google Cloud Platform updates the images for the software packages to fix critical issues and vulnerabilities, but doesn't update software that you have already deployed.

Google Cloud Platform Deployment Manager

 Google Cloud Deployment Manager is an infrastructure deployment service that automates the creation and management of Google Cloud Platform resources. Write flexible templates and configuration files, and use them to create deployments containing a variety of Google Cloud services, such as Cloud Storage, Compute Engine, and Cloud SQL, configured to work together.
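As a rough sketch, a Deployment Manager configuration is a YAML file listing resources by type. The names and zone below are placeholders, not values from the original post:

```yaml
# config.yaml — illustrative Deployment Manager configuration
resources:
- name: example-vm
  type: compute.v1.instance
  properties:
    zone: us-central1-a
    machineType: zones/us-central1-a/machineTypes/e2-small
    disks:
    - deviceName: boot
      boot: true
      autoDelete: true
      initializeParams:
        sourceImage: projects/debian-cloud/global/images/family/debian-11
    networkInterfaces:
    - network: global/networks/default
```

It would then be deployed with something like `gcloud deployment-manager deployments create example-deployment --config config.yaml`.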

What are Storage Area Network (SAN) and Network Attached Storage (NAS)?

 A Storage Area Network (SAN) is a dedicated high-speed network that provides access to block-level storage. SANs were adopted to improve application availability and performance by segregating storage traffic from the rest of the LAN. SAN and Network Attached Storage (NAS) are both network-based storage solutions. A SAN typically uses Fibre Channel connectivity, while NAS typically connects to a network through a standard Ethernet connection. A SAN stores data at the block level, while NAS accesses data as files. To a client OS, a SAN typically appears as a disk and exists as its own separate network of storage devices, while NAS appears as a file server.

gsutil vs Storage Transfer Service vs Transfer Appliance

When transferring data from an on-premises location, use gsutil. When transferring data from another cloud storage provider, use Storage Transfer Service.

gsutil is a Python application that lets you access Cloud Storage from the command line. You can use gsutil for a wide range of bucket and object management tasks, including:
- Creating and deleting buckets.
- Uploading, downloading, and deleting objects.
- Listing buckets and objects.
- Moving, copying, and renaming objects.
- Editing object and bucket ACLs.
gsutil performs all operations, including uploads and downloads, using HTTPS and transport-layer security (TLS). For a complete list of guides to completing tasks with gsutil, see Cloud Storage How-to Guides.

Storage Transfer Service is a product that enables you to:
- Move or back up data to a Cloud Storage bucket, either from other cloud storage providers or from your on-premises storage.
- Move data from one Cloud Storage bucket to another, so that it is available to different groups of users.
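The bucket and object management tasks above map to gsutil commands along these lines (the bucket name and file are placeholders):

```shell
gsutil mb gs://my-example-bucket                # create a bucket
gsutil cp ./report.csv gs://my-example-bucket/  # upload an object
gsutil ls gs://my-example-bucket                # list objects
gsutil mv gs://my-example-bucket/report.csv gs://my-example-bucket/archive/report.csv  # move/rename
gsutil acl ch -u AllUsers:R gs://my-example-bucket/archive/report.csv  # edit an object ACL
gsutil rm -r gs://my-example-bucket             # delete objects and the bucket
```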

What is Google Cloud Endpoints

  Endpoints is a distributed API management system. It provides an API console, hosting, logging, monitoring, and other features to help you create, share, maintain, and secure your APIs. Endpoints is available for use with the distributed Extensible Service Proxy (ESP) or the Extensible Service Proxy V2 (ESPv2). Each proxy supports the platforms listed below:
- App Engine flexible (ESP only)
- Google Kubernetes Engine (ESP or ESPv2)
- Compute Engine (ESP or ESPv2)
- Kubernetes (ESP or ESPv2)
- App Engine standard (ESPv2 only)
- Cloud Functions (ESPv2 only)
- Cloud Run (ESPv2 only)
- Cloud Run for Anthos (ESPv2 only)
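An Endpoints API is typically described by an OpenAPI document. A minimal, hypothetical sketch (the host and path are placeholders, not from the original post):

```yaml
# openapi.yaml — illustrative Cloud Endpoints API definition
swagger: "2.0"
info:
  title: example-api
  version: "1.0.0"
host: example-api.endpoints.my-project.cloud.goog
paths:
  /greeting:
    get:
      operationId: getGreeting
      responses:
        "200":
          description: A greeting
```

The service would then be registered with something like `gcloud endpoints services deploy openapi.yaml`.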

Penetration Testing

 A penetration test, also known as a pen test, is a simulated cyber attack against your computer system to check for exploitable vulnerabilities. In the context of web application security, penetration testing is commonly used to augment a web application firewall (WAF). Pen testing can involve the attempted breaching of any number of application systems (e.g., application programming interfaces (APIs), frontend/backend servers) to uncover vulnerabilities, such as unsanitized inputs that are susceptible to code injection attacks. Insights provided by the penetration test can be used to fine-tune your WAF security policies and patch detected vulnerabilities.
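As a small, self-contained illustration of the "unsanitized inputs" class of vulnerability a pen test hunts for, the following sketch contrasts a string-built SQL query with a parameterized one (table and data are made up for the example):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, secret TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 's3cr3t')")

def find_user_unsafe(name):
    # Vulnerable: user input is concatenated straight into the SQL string
    return conn.execute("SELECT name FROM users WHERE name = '%s'" % name).fetchall()

def find_user_safe(name):
    # Parameterized query: input is bound as data, not interpolated as SQL
    return conn.execute("SELECT name FROM users WHERE name = ?", (name,)).fetchall()

payload = "' OR '1'='1"
print(find_user_unsafe(payload))  # injection matches every row: [('alice',)]
print(find_user_safe(payload))    # no user has that literal name: []
```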

Cloud Data Loss Prevention (DLP) API

 A fully managed service designed to help you discover, classify, and protect your most sensitive data.
- Take charge of your data, on or off cloud.
- Inspect your data to gain valuable insights and make informed decisions to secure it.
- Effectively reduce data risk with de-identification methods like masking and tokenization.
- Seamlessly inspect and transform structured and unstructured data.
Benefits:
- Gain visibility into the data you store and process: create dashboards and audit reports; automate tagging, remediation, or policy based on findings; connect DLP results into Security Command Center or Data Catalog, or export them to your own SIEM or governance tool.
- Configure data inspection and monitoring with ease: schedule inspection jobs directly in the console UI, or stream data into the API to inspect or protect workloads on Google Cloud, on-premises, mobile applications, or other cloud service providers.
- Reduce risk to unlock more data for your business: protecting sensitive data lets your business safely make broader use of it.
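To make the "masking" de-identification idea concrete, here is a plain-Python sketch (this is not the Cloud DLP API; the regex and helper name are invented for illustration) that redacts card-like numbers while keeping the last four digits:

```python
import re

# Illustrative masking only — Cloud DLP does this detection and
# transformation as a managed service with far richer infoType detectors.
CARD_RE = re.compile(r"\b(?:\d[ -]?){12}(\d{4})\b")

def mask_card_numbers(text):
    # Keep the last four digits, mask the rest
    return CARD_RE.sub(lambda m: "****-****-****-" + m.group(1), text)

print(mask_card_numbers("card: 4111 1111 1111 1111"))
# card: ****-****-****-1111
```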

What are service accounts?

  A service account is a special kind of account used by an application or a virtual machine (VM) instance, not a person. Applications use service accounts to make authorized API calls, authorized as either the service account itself, or as Google Workspace or Cloud Identity users through domain-wide delegation. For example, a Compute Engine VM may run as a service account, and that account can be given permissions to access the resources it needs. This way the service account is the identity of the service, and the service account's permissions control which resources the service can access. A service account is identified by its email address, which is unique to the account.

Service accounts differ from user accounts in a few key ways:
- Service accounts do not have passwords, and cannot log in via browsers or cookies.
- Service accounts are associated with private/public RSA key pairs that are used for authentication.
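The Compute Engine example above might look like this on the command line (project, account, and VM names are placeholders):

```shell
# Create a service account to act as the VM's identity
gcloud iam service-accounts create app-runner --display-name="App runner"

# Grant it only the permissions it needs (here: read objects in Cloud Storage)
gcloud projects add-iam-policy-binding my-project \
    --member="serviceAccount:app-runner@my-project.iam.gserviceaccount.com" \
    --role="roles/storage.objectViewer"

# Run a VM as that service account
gcloud compute instances create my-vm \
    --service-account=app-runner@my-project.iam.gserviceaccount.com \
    --scopes=cloud-platform
```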

App Engine Standard Environment Vs App Engine Flexible environment

  Feature: Standard environment / Flexible environment
- Instance startup time: Seconds / Minutes
- Maximum request timeout: Depends on the runtime and type of scaling / 60 minutes
- Background threads: Yes, with restrictions / Yes
- Background processes: No / Yes
- SSH debugging: No / Yes
- Scaling: Manual, Basic, Automatic / Manual, Automatic
- Scale to zero: Yes / No, minimum 1 instance
- Writing to local disk: Java 8, Java 11, Node.js, Python 3, PHP 7, Ruby, Go 1.11, and Go 1.12+ have read and write access to the /tmp directory; Python 2.7 and PHP 5.5 don't have write access to the disk / Yes, ephemeral (disk initialized on each VM startup)
- Modifying the runtime: No / Yes (through Dockerfile)
- Deployment time: Seconds / Minutes
- Automatic in-place security patches: Yes / Yes (excludes container image runtime)
- Access to Google Cloud APIs & services such as Cloud Storage, Cloud SQL, Memorystore, Tasks, and others: Yes / Yes
- WebSockets: No (Java 8, Python 2, and PHP 5 provide a proprietary Sockets API, beta) / Yes
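The choice of environment shows up directly in app.yaml. A rough sketch (runtime values are examples):

```yaml
# Standard environment app.yaml — a language-specific sandboxed runtime
runtime: python39

# Flexible environment app.yaml — runs on VMs, optionally with a
# custom Dockerfile ("Modifying the runtime" row above)
# runtime: custom
# env: flex
```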

Google Cloud Legacy networks

  Legacy networks

Google Cloud VPC network overview

  VPC network overview

What is Data rehydration

  Once you capture your data onto the Transfer Appliance, ship the appliance to the Google upload facility for rehydration. Data rehydration is the process by which you fully reconstitute the files so you can access and use the transferred data. To rehydrate data, the data is first copied from the Transfer Appliance to your Cloud Storage staging location. The data uploaded to your staging location is still compressed, deduplicated, and encrypted. Data rehydration reverses this process and restores your data to a usable state. As the data is rehydrated, it is moved to the Cloud Storage destination bucket that you created. To perform data rehydration, use a Rehydrator instance, a virtual appliance that runs as a Compute Engine instance on Google Cloud Platform. The Transfer Appliance Rehydrator compares the CRC32C hash value of each file being rehydrated with the hash value computed when the file was captured. If the checksums don't match, the file is skipped.
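The capture-time versus rehydration-time checksum comparison can be sketched as follows. Note this uses plain CRC32 from the standard library as a stand-in; Transfer Appliance uses CRC32C, a different polynomial that needs a third-party package (e.g. google-crc32c):

```python
import zlib

def checksum(data: bytes) -> int:
    # CRC32 stand-in for the CRC32C check described above
    return zlib.crc32(data) & 0xFFFFFFFF

captured = checksum(b"file contents at capture time")
rehydrated = checksum(b"file contents at capture time")
corrupted = checksum(b"file contents after corruption")

assert captured == rehydrated   # checksums match: file restored correctly
assert captured != corrupted    # mismatch: this file would be skipped
```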

Differences among Cloud Armor, VPC Service Controls, and Cloud Data Loss Prevention

 Google Cloud Armor delivers defense at scale against infrastructure and application Distributed Denial of Service (DDoS) attacks, using Google's global infrastructure and security systems, but it doesn't help with data redaction.
VPC Service Controls allows users to define a security perimeter around GCP resources such as Cloud Storage buckets, Bigtable instances, and BigQuery datasets, to constrain data within a VPC and help mitigate data exfiltration risks, but it doesn't help with data redaction either.
Cloud Data Loss Prevention (DLP) helps you better understand and manage sensitive data. It provides fast, scalable classification and redaction for sensitive data elements like credit card numbers, names, social security numbers, US and selected international identifier numbers, phone numbers, and GCP credentials.

VPC Service Controls

  Isolate resources of multi-tenant Google Cloud services to mitigate data exfiltration risks. (Doc)
Benefits:
- Mitigate data exfiltration risks: enforce a security perimeter with VPC Service Controls to isolate resources of multi-tenant Google Cloud services, reducing the risk of data exfiltration or data breach.
- Keep data private inside the VPC: configure private communication between cloud resources from VPC networks spanning cloud and on-premises hybrid deployments, and take advantage of fully managed services like Cloud Storage, Bigtable, and BigQuery.
- Deliver independent data access controls: VPC Service Controls delivers an extra layer of control with a defense-in-depth approach for multi-tenant services, helping protect service access from both insider and outsider threats.
Key features:
- Centrally manage multi-tenant service access at scale: with VPC Service Controls, enterprise security teams can define fine-grained perimeter controls and enforce that security posture across numerous projects and services.

Secret Manager conceptual overview

  Secrets management and key management: Secret Manager allows you to store, manage, and access secrets as binary blobs or text strings. With the appropriate permissions, you can view the contents of a secret. Secret Manager works well for storing configuration information such as database passwords, API keys, or TLS certificates needed by an application at runtime. A key management system, such as Cloud KMS, allows you to manage cryptographic keys and to use them to encrypt or decrypt data; however, you cannot view, extract, or export the key material itself. Similarly, you can use a key management system to encrypt sensitive data before transmitting or storing it, and then decrypt it before use. Using a key management system to protect a secret in this way is more complex and less efficient than using Secret Manager. Cloud KMS is designed to handle large encryption workloads, such as encrypting rows in a database or encrypting binary data such as images.
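The store-then-access flow for Secret Manager looks roughly like this on the command line (the secret name and value are placeholders):

```shell
# Store a secret value as version 1 of a new secret
echo -n "s3cr3t-db-password" | gcloud secrets create db-password --data-file=-

# Retrieve the latest version at application startup
gcloud secrets versions access latest --secret=db-password
```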

Google Cloud Armor

Helps protect your applications and websites against denial-of-service and web attacks.
- Benefit from DDoS protection and WAF at Google scale.
- Detect and mitigate attacks against your Cloud Load Balancing workloads.
- Mitigate OWASP Top 10 risks and help protect workloads on-premises or in the cloud.
Benefits:
- Enterprise-grade DDoS defense: Cloud Armor benefits from Google's experience of protecting key internet properties such as Google Search, Gmail, and YouTube. It provides built-in defenses against L3 and L4 DDoS attacks.
- Mitigate OWASP Top 10 risks: Cloud Armor provides predefined rules to help defend against attacks such as cross-site scripting (XSS) and SQL injection (SQLi).
- Managed Protection: with the Cloud Armor Managed Protection Plus tier, you get access to DDoS and WAF services, curated rule sets, and other services for a predictable monthly price.
Key features:
- IP-based and geo-based access control: filter your incoming traffic based on IPv4 and IPv6 addresses or CIDRs.

What is the difference between Google Cloud Dataflow and Google Cloud Dataproc?

  Here are three main points to consider when choosing between Dataproc and Dataflow:
- Provisioning: Dataproc requires manual provisioning of clusters; Dataflow is serverless, with automatic provisioning of clusters.
- Hadoop dependencies: Dataproc should be used if the processing has any dependencies on tools in the Hadoop ecosystem.
- Portability: Dataflow/Beam provides a clear separation between processing logic and the underlying execution engine. This helps with portability across different execution engines that support the Beam runtime, i.e. the same pipeline code can run seamlessly on Dataflow, Spark, or Flink.
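The portability point can be sketched in plain Python (this is an analogy, not the Beam API): the pipeline logic is written once, and different "runners" decide how it executes, mirroring how one Beam pipeline can run on Dataflow, Spark, or Flink:

```python
def pipeline(records):
    # Processing logic, independent of how or where it executes
    return [r.upper() for r in records if r.startswith("a")]

def direct_runner(logic, data):
    # Run locally, in one process (like Beam's DirectRunner)
    return logic(data)

def chunked_runner(logic, data, size=2):
    # Pretend-distributed execution: apply the same logic per chunk
    out = []
    for i in range(0, len(data), size):
        out.extend(logic(data[i:i + size]))
    return out

data = ["apple", "banana", "avocado"]
print(direct_runner(pipeline, data))   # ['APPLE', 'AVOCADO']
print(chunked_runner(pipeline, data))  # ['APPLE', 'AVOCADO']
```

The same `pipeline` function runs unchanged under both runners, which is the separation of concerns the note describes.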