Thursday, December 13, 2018

H13-611-ENU HCNA-Storage (Huawei Certified Network Associate - Storage)

1. Huawei H13-611-ENU HCNA-Storage Certification Exam
This article introduces the H13-611-ENU HCNA-Storage exam outline; other exam outlines can be obtained from the related training materials or the Huawei Online Learning website: http://support.huawei.com/learning.

Certification: HCNA-Storage
Exam Code: H13-611-ENU
Exam Title: HCNA-Storage (Huawei Certified Network Associate - Storage)
Duration: 90 min
Pass Score/Total Score: 600/1000

2. H13-611-ENU HCNA-Storage Exam Outline
2.1 Exam Content
The HCNA-Storage exam covers basic storage technologies. It includes, but is not limited to, the following areas: the latest storage technologies and trends; storage technologies for AI, big data, and cloud; an introduction to storage ecosystems; business continuity solutions and their applications; and the operation and management of storage systems in data centers.

2.2 Knowledge points
1) Latest Storage Technologies and Trends.
2) Storage Technologies for AI, Big Data and Cloud.
3) Introduction to Storage Ecosystems.
4) Business Continuity Solutions and Their Applications.
5) Operation and Management of Storage Systems in Data Centers.

Note:
The content mentioned in this article is just a general exam guide; the exam may also contain more related content that is not mentioned here.

2.3 Reference

Huawei HCNA-Storage Training Courses
Huawei OceanStor V3/V5 (2200/2600/5300/5500/5600/5800/6800) Product Training Course
Huawei OceanStor V3/V5 (2200/2600/5300/5500/5600/5800/6800) Product Documents

2.4 Recommended Training
HCNA-Storage V4.0 Training



Wednesday, December 12, 2018

Exam AZ-301 Microsoft Azure Architect Design (beta)

Languages: English
Audiences: IT professionals
Technology: Microsoft Azure

Skills measured
This exam measures your ability to accomplish the technical tasks listed below. The percentages indicate the relative weight of each major topic area on the exam. The higher the percentage, the more questions you are likely to see on that content area on the exam. View video tutorials about the variety of question types on Microsoft exams.

Do you have feedback about the relevance of the skills measured on this exam? Please send Microsoft your comments. All feedback will be reviewed and incorporated as appropriate while still maintaining the validity and reliability of the certification process. Note that Microsoft will not respond directly to your feedback. We appreciate your input in ensuring the quality of the Microsoft Certification program.

If you have concerns about specific questions on this exam, please submit an exam challenge.

If you have other questions or feedback about Microsoft Certification exams or about the certification program, registration, or promotions, please contact your Regional Service Center.

Determine Workload Requirements (10-15%)
Gather Information and Requirements
May include but not limited to: Identify compliance requirements, identity and access management infrastructure, and service-oriented architectures (e.g., integration patterns, service design, service discoverability); identify accessibility (e.g. Web Content Accessibility Guidelines), availability (e.g. Service Level Agreement), capacity planning and scalability, deploy-ability (e.g., repositories, failback, slot-based deployment), configurability, governance, maintainability (e.g. logging, debugging, troubleshooting, recovery, training), security (e.g. authentication, authorization, attacks), and sizing (e.g. support costs, optimization) requirements; recommend changes during project execution (ongoing); evaluate products and services to align with solution; create testing scenarios
Optimize Consumption Strategy
May include but not limited to: Optimize app service, compute, identity, network, and storage costs
Design an Auditing and Monitoring Strategy
May include but not limited to: Define logical groupings (tags) for resources to be monitored; determine levels and storage locations for logs; plan for integration with monitoring tools; recommend appropriate monitoring tool(s) for a solution; specify mechanism for event routing and escalation; design auditing for compliance requirements; design auditing policies and traceability requirements
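As a rough illustration of tag-based groupings driving monitoring scope, the Python sketch below groups a hypothetical resource inventory by an "env" tag; the resource names, tag keys, and data source are all invented, and this is not the Azure Monitor API.

# Illustrative sketch (not the Azure Monitor API): group resources by a
# hypothetical "env" tag so alert rules can be scoped per logical group.
from collections import defaultdict

resources = [  # hypothetical inventory pulled from any source
    {"name": "vm-web-01", "tags": {"env": "prod", "owner": "web"}},
    {"name": "vm-web-02", "tags": {"env": "prod", "owner": "web"}},
    {"name": "vm-test-01", "tags": {"env": "test", "owner": "qa"}},
]

groups = defaultdict(list)
for r in resources:
    groups[r["tags"].get("env", "untagged")].append(r["name"])

for env, names in groups.items():
    # In a real design these groups would map to action groups or log workspaces.
    print(f"{env}: monitor {names}")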

Design for Identity and Security (20-25%)
Design Identity Management
May include but not limited to: Choose an identity management approach; design an identity delegation strategy, identity repository (including directory, application, systems, etc.); design self-service identity management and user and persona provisioning; define personas and roles; recommend appropriate access control strategy (e.g., attribute-based, discretionary access, history-based, identity-based, mandatory, organization-based, role-based, rule-based, responsibility-based)
Design Authentication
May include but not limited to: Choose an authentication approach; design a single sign-on approach; design for IPSec, logon, multi-factor, network access, and remote authentication
Design Authorization
May include but not limited to: Choose an authorization approach; define access permissions and privileges; design secure delegated access (e.g., OAuth, OpenID, etc.); recommend when and how to use API keys
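To make the API-key idea concrete, here is a minimal Python sketch of a constant-time key check using the standard library; the client IDs, key values, and in-memory key store are hypothetical placeholders, not a recommended production design.

# Minimal sketch of an API-key check; names and the key store are hypothetical.
import hmac

VALID_KEYS = {"client-a": "s3cr3t-key-a"}  # in practice: a secret store, never source code

def is_authorized(client_id: str, presented_key: str) -> bool:
    expected = VALID_KEYS.get(client_id)
    if expected is None:
        return False
    # compare_digest avoids leaking key length/content via timing differences
    return hmac.compare_digest(expected, presented_key)

print(is_authorized("client-a", "s3cr3t-key-a"))  # True
print(is_authorized("client-a", "wrong"))         # False
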
Design for Risk Prevention for Identity
May include but not limited to: Design a risk assessment strategy (e.g., access reviews, RBAC policies, physical access); evaluate agreements involving services or products from vendors and contractors; update solution design to address and mitigate changes to existing security policies, standards, guidelines and procedures
Design a Monitoring Strategy for Identity and Security
May include but not limited to: Design for alert notifications; design an alert and metrics strategy; recommend authentication monitors

Design a Data Platform Solution (15-20%)
Design a Data Management Strategy
May include but not limited to: Choose between managed and unmanaged data store; choose between relational and non-relational databases; design data auditing and caching strategies; identify data attributes (e.g., relevancy, structure, frequency, size, durability, etc.); recommend Database Transaction Unit (DTU) sizing; design a data retention policy; design for data availability, consistency, and durability; design a data warehouse strategy
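One of the caching strategies mentioned above is cache-aside; the sketch below shows the pattern in plain Python with an in-memory dictionary standing in for a cache service and a stub standing in for the database, both of which are assumptions for illustration only.

# Minimal cache-aside sketch (plain Python, no specific cache service assumed):
# read from the cache first, fall back to the data store, then populate the cache.
cache = {}

def query_database(key):            # stand-in for a real database call
    return f"row-for-{key}"

def get(key):
    if key in cache:
        return cache[key]           # cache hit
    value = query_database(key)     # cache miss: go to the source of truth
    cache[key] = value
    return value

print(get("customer:42"))   # miss, loads from the "database"
print(get("customer:42"))   # hit
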
Design a Data Protection Strategy
May include but not limited to: Recommend geographic data storage; design an encryption strategy for data at rest, for data in transmission, and for data in use; design a scalability strategy for data; design secure access to data; design a data loss prevention (DLP) policy
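As a sketch of encryption for data at rest, the example below uses symmetric encryption from the third-party cryptography package; it is illustrative only, the plaintext is invented, and in a real Azure design the key would be generated and held by a key-management service rather than stored with the data.

# Illustrative only: symmetric encryption of data at rest with the third-party
# "cryptography" package (pip install cryptography).
from cryptography.fernet import Fernet

key = Fernet.generate_key()          # in practice: held by a key-management service
cipher = Fernet(key)

plaintext = b"customer-record: 42"
token = cipher.encrypt(plaintext)    # store the token, not the plaintext
assert cipher.decrypt(token) == plaintext
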
Design and Document Data Flows
May include but not limited to: Identify data flow requirements; create a data flow diagram; design a data flow to meet business requirements; design a data import and export strategy
Design a Monitoring Strategy for the Data Platform
May include but not limited to: Design for alert notifications; design an alert and metrics strategy

Design a Business Continuity Strategy (15-20%)
Design a Site Recovery Strategy
May include but not limited to: Design a recovery solution; design a site recovery replication policy; design for site recovery capacity and for storage replication; design site failover and failback (planned/unplanned); design the site recovery network; recommend recovery objectives (e.g., Azure, on-prem, hybrid, Recovery Time Objective (RTO), Recovery Level Objective (RLO), Recovery Point Objective (RPO)); identify resources that require site recovery; identify supported and unsupported workloads; recommend a geographical distribution strategy
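A back-of-the-envelope check like the one below can make RPO/RTO recommendations concrete; all numbers are hypothetical and the logic is simply that worst-case data loss equals one replication interval and measured failover time must fit inside the RTO.

# Hypothetical numbers: does a proposed replication schedule and failover
# runbook satisfy the stated RPO/RTO?
rpo_minutes = 15            # max tolerable data loss
rto_minutes = 60            # max tolerable downtime

replication_interval_minutes = 5    # worst-case data loss equals one interval
measured_failover_minutes = 42      # from the last DR drill

print("RPO met:", replication_interval_minutes <= rpo_minutes)  # True
print("RTO met:", measured_failover_minutes <= rto_minutes)     # True
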
Design for High Availability
May include but not limited to: Design for application redundancy, autoscaling, data center and fault domain redundancy, and network redundancy; identify resources that require high availability; identify storage types for high availability
Design a disaster recovery strategy for individual workloads
May include but not limited to: Design failover/failback scenario(s); document recovery requirements; identify resources that require backup; recommend a geographic availability strategy
Design a Data Archiving Strategy
May include but not limited to: Recommend storage types and methodology for data archiving; identify requirements for data archiving and business compliance requirements for data archiving; identify SLA(s) for data archiving

Design for Deployment, Migration, and Integration (10-15%)
Design Deployments
May include but not limited to: Design a compute, container, data platform, messaging solution, storage, and web app and service deployment strategy
Design Migrations
May include but not limited to: Recommend a migration strategy; design data import/export strategies during migration; determine the appropriate application migration, data transfer, and network connectivity method; determine migration scope, including redundant, related, trivial, and outdated data; determine application and data compatibility
Design an API Integration Strategy
May include but not limited to: Design an API gateway strategy; determine policies for internal and external consumption of APIs; recommend a hosting structure for API management

Design an Infrastructure Strategy (15-20%)
Design a Storage Strategy
May include but not limited to: Design a storage provisioning strategy; design storage access strategy; identify storage requirements; recommend a storage solution and storage management tools
Design a Compute Strategy
May include but not limited to: Design compute provisioning and secure compute strategies; determine appropriate compute technologies (e.g., virtual machines, functions, service fabric, container instances, etc.); design an Azure HPC environment; identify compute requirements; recommend management tools for compute
Design a Networking Strategy
May include but not limited to: Design network provisioning and network security strategies; determine appropriate network connectivity technologies; identify networking requirements; recommend network management tools
Design a Monitoring Strategy for Infrastructure
May include but not limited to: Design for alert notifications; design an alert and metrics strategy

Preparation options
Instructor-led training

Who should take this exam?
Candidates for this exam are Azure Solution Architects who advise stakeholders and translate business requirements into secure, scalable, and reliable solutions.

Candidates should have advanced experience and knowledge across various aspects of IT operations, including networking, virtualization, identity, security, business continuity, disaster recovery, data management, budgeting, and governance. This role requires managing how decisions in each area affect an overall solution.

Candidates must be proficient in Azure administration, Azure development, and DevOps, and have expert-level skills in at least one of those domains.
QUESTION: 1
You need to deploy resources to host a stateless web app in an Azure subscription. The solution must meet the following requirements:
• Provide access to the full .NET framework.
• Provide redundancy if an Azure region fails.
• Grant administrators access to the operating system to install custom application dependencies.
Solution: You deploy a web app in an Isolated App Service plan.
Does this meet the goal?

A. Yes
B. No

Answer: A

QUESTION: 2
You need to deploy resources to host a stateless web app in an Azure subscription. The solution must meet the following requirements:
• Provide access to the full .NET framework.
• Provide redundancy if an Azure region fails.
• Grant administrators access to the operating system to install custom application dependencies.
Solution: You deploy a virtual machine scale set that uses autoscaling.
Does this meet the goal?

A. Yes
B. No

Answer: B

QUESTION: 3
You need to deploy resources to host a stateless web app in an Azure subscription. The solution must meet the following requirements:
• Provide access to the full .NET framework.
• Provide redundancy if an Azure region fails.
• Grant administrators access to the operating system to install custom application dependencies.
Solution: You deploy an Azure virtual machine to two Azure regions, and you deploy an Azure
Application Gateway.
Does this meet the goal?

A. Yes
B. No

Answer: B

QUESTION: 4
You are designing an Azure solution for a company that wants to move a .NET Core web application
from an on-premises data center to Azure. The web application relies on a Microsoft SQL Server 2016
database on Windows Server 2016. The database server will not move to Azure.
A separate networking team is responsible for configuring network permissions.
The company uses Azure ExpressRoute and has an ExpressRoute gateway connected to an Azure
virtual network named VNET1.
You need to recommend a solution for deploying the web application.
Solution: Deploy the web application to a web app hosted in a Premium App Service plan.
Does this meet the goal?

A. Yes
B. No

Answer: A

QUESTION: 5
You are designing an Azure solution for a company that wants to move a .NET Core web application
from an on-premises data center to Azure. The web application relies on a Microsoft SQL Server 2016
database on Windows Server 2016. The database server will not move to Azure.
A separate networking team is responsible for configuring network permissions.
The company uses Azure ExpressRoute and has an ExpressRoute gateway connected to an Azure
virtual network named VNET1.
You need to recommend a solution for deploying the web application.
Solution: Deploy the web application to a web app hosted in an Isolated App Service plan on VNET1.
Does this meet the goal?

A. Yes
B. No

Answer: B

Thursday, April 26, 2018

1Z0-449 Oracle Big Data 2017 Implementation Essentials

Exam Number: 1Z0-449
Exam Title: Oracle Big Data 2017 Implementation Essentials

Associated Certification Paths

Passing this exam is required to earn these certifications. Select each certification title below to view full requirements.

Oracle Big Data 2017 Certification Implementation Specialist
Duration: 120 minutes
Number of Questions: 72
Passing Score: 67%

Beta exam score reports will be available in CertView approximately June 6, 2016. You will receive an email with instructions on how to access your beta exam results.

View passing score policy
Validated Against:
Exam has been validated against Oracle Big Data X4-2.
Format: Multiple Choice

Complete Recommended Training
Complete the training below to prepare for your exam (optional):
For Partners Only
Oracle Big Data 2016 Implementation Specialist
Oracle Big Data 2016 Implementation Boot Camp
OU Training
Oracle Big Data Fundamentals
Oracle NoSQL Database for Administrators
Additional Preparation and Information

A combination of Oracle training and hands-on experience (attained via labs and/or field experience) provides the best preparation for passing the exam.
Oracle Documentation
Oracle Big Data Documentation
Oracle NoSQL Documentation

Product tutorials
Big Data Learning Library

Datasheets and white papers
Oracle Big Data Resources and Whitepapers
Oracle NoSQL Enterprise Edition
Apache Flume User Guide


Big Data Technical Overview
Describe the architectural components of the Big Data Appliance
Describe how Big Data Appliance integrates with Exadata and Exalytics
Identify and architect the services that run on each node in the Big Data Appliance, as it expands from single to multiple nodes
Describe the Big Data Discovery and Big Data Spatial and Graph solutions
Explain the business drivers behind Big Data and NoSQL versus Hadoop

Core Hadoop
Explain the Hadoop Ecosystem
Implement the Hadoop Distributed File System
Identify the benefits of the Hadoop Distributed File System (HDFS)
Describe the architectural components of MapReduce
Describe the differences between MapReduce and YARN
Describe Hadoop High Availability
Describe the importance of the NameNode, DataNode, JobTracker, and TaskTracker in Hadoop
Use Flume in the Hadoop Distributed File System
Implement the data flow mechanism used in Flume

Oracle NoSQL Database
Use an Oracle NoSQL database
Describe the architectural components (Shard, Replica, Master) of the Oracle NoSQL database
Set up the KVStore
Use KVLite to test NoSQL applications
Integrate an Oracle NoSQL database with an Oracle database and Hadoop

Cloudera Enterprise Hadoop Distribution
Describe the Hive architecture
Set up Hive with formatters and SerDe
Implement the Oracle Table Access for a Hadoop Connector
Describe the Impala real-time query and explain how it differs from Hive
Create a database and table from a Hadoop Distributed File System file in Hive
Use Pig Latin to query data in HDFS
Execute a Hive query
Move data from a database to a Hadoop Distributed File System using Sqoop

Programming with R
Describe the Oracle R Advanced Analytics for a Hadoop connector
Use Oracle R Advanced Analytics for a Hadoop connector
Describe the architectural components of Oracle R Advanced Analytics for Hadoop
Implement an Oracle Database connection with Oracle R Enterprise

Oracle Loader for Hadoop
Explain the Oracle Loader for Hadoop
Configure the online and offline options for the Oracle Loader for Hadoop
Load Hadoop Distributed File System Data into an Oracle database

Oracle SQL Connector for Hadoop Distributed File System (HDFS)
Configure an external table for HDFS using the Oracle SQL Connector for Hadoop
Install the Oracle SQL Connector for Hadoop
Describe the Oracle SQL Connector for Hadoop

Oracle Data Integrator (ODI) and Hadoop
Use ODI to transform data from Hive to Hive
Use ODI to move data from Hive to Oracle
Use ODI to move data from an Oracle database to a Hadoop Distributed File System using Sqoop
Configure the Oracle Data Integrator with Application Adaptor for Hadoop to interact with Hadoop

Big Data SQL
Explain how Big Data SQL is used in a Big Data Appliance/Exadata architecture
Set up and configure Oracle Big Data SQL
Demonstrate Big Data SQL syntax used in create table statements
Access NoSQL and Hadoop data using a Big Data SQL query

XQuery for Hadoop Connector
Set up the Oracle XQuery for Hadoop connector
Perform a simple XQuery using Oracle XQuery for Hadoop
Use Oracle XQuery with Hadoop-Hive to map an XML file into a Hive table

Securing Hadoop
Describe Oracle Big Data Appliance security and encryption features
Set up Kerberos security in Hadoop
Set up the Hadoop Distributed File System to use Access Control Lists
Set up Hive and Impala access security using Apache Sentry
Use LDAP and Active Directory for Hadoop access control

QUESTION 1
You need to place the results of a PigLatin script into an HDFS output directory.
What is the correct syntax in Apache Pig?

A. update hdfs set D as ‘./output’;
B. store D into ‘./output’;
C. place D into ‘./output’;
D. write D as ‘./output’;
E. hdfsstore D into ‘./output’;

Answer: B

Explanation:
Use the STORE operator to run (execute) Pig Latin statements and save (persist) results to the file system. Use STORE
for production scripts and batch mode processing.
Syntax: STORE alias INTO 'directory' [USING function];
Example: In this example data is stored using PigStorage and the asterisk character (*) as the field delimiter.
A = LOAD 'data' AS (a1:int,a2:int,a3:int);
DUMP A;
(1,2,3)
(4,2,1)
(8,3,4)
(4,3,3)
(7,2,5)
(8,4,3)
STORE A INTO 'myoutput' USING PigStorage ('*');
CAT myoutput;
1*2*3
4*2*1
8*3*4
4*3*3
7*2*5
8*4*3

QUESTION 2
The Hadoop NameNode is running on port #3001, the DataNode on port #4001, the KVStore agent on port #5001, and
the replication node on port #6001. All the services are running on localhost.
What is the valid syntax to create an external table in Hive and query data from the NoSQL Database?

A. CREATE EXTERNAL TABLE IF NOT EXISTS MOVIE (id INT, original_title STRING, overview STRING)
STORED BY 'oracle.kv.hadoop.hive.table.TableStorageHandler'
TBLPROPERTIES ("oracle.kv.kvstore"="kvstore", "oracle.kv.hosts"="localhost:3001",
"oracle.kv.hadoop.hosts"="localhost", "oracle.kv.tableName"="MOVIE");

B. CREATE EXTERNAL TABLE IF NOT EXISTS MOVIE (id INT, original_title STRING, overview STRING)
STORED BY 'oracle.kv.hadoop.hive.table.TableStorageHandler'
TBLPROPERTIES ("oracle.kv.kvstore"="kvstore", "oracle.kv.hosts"="localhost:5001",
"oracle.kv.hadoop.hosts"="localhost", "oracle.kv.tableName"="MOVIE");

C. CREATE EXTERNAL TABLE IF NOT EXISTS MOVIE (id INT, original_title STRING, overview STRING)
STORED BY 'oracle.kv.hadoop.hive.table.TableStorageHandler'
TBLPROPERTIES ("oracle.kv.kvstore"="kvstore", "oracle.kv.hosts"="localhost:4001",
"oracle.kv.hadoop.hosts"="localhost", "oracle.kv.tableName"="MOVIE");

D. CREATE EXTERNAL TABLE IF NOT EXISTS MOVIE (id INT, original_title STRING, overview STRING)
STORED BY 'oracle.kv.hadoop.hive.table.TableStorageHandler'
TBLPROPERTIES ("oracle.kv.kvstore"="kvstore", "oracle.kv.hosts"="localhost:6001",
"oracle.kv.hadoop.hosts"="localhost", "oracle.kv.tableName"="MOVIE");

Answer: C

Explanation:
The following is the basic syntax of a Hive CREATE TABLE statement for a Hive external table over an Oracle NoSQL table:
CREATE EXTERNAL TABLE tablename (colname coltype[, colname coltype,...])
STORED BY 'oracle.kv.hadoop.hive.table.TableStorageHandler'
TBLPROPERTIES (
"oracle.kv.kvstore" = "database",
"oracle.kv.hosts" = "nosql_node1:port[, nosql_node2:port...]",
"oracle.kv.hadoop.hosts" = "hadoop_node1[,hadoop_node2...]",
"oracle.kv.tableName" = "table_name");
Where oracle.kv.hosts is a comma-delimited list of host names and port numbers in the
Oracle NoSQL Database cluster. Each string has the format hostname:port.
Enter multiple names to provide redundancy in the event that a host fails.

QUESTION 3
You need to create an architecture for your customer’s Oracle NoSQL KVStore. The customer needs to store clinical and non-clinical data together but only the clinical data is mission critical.
How can both types of data exist in the same KVStore?

A. Store the clinical data on the master node and the non-clinical data on the replica nodes.
B. Store the two types of data in separate partitions on highly available storage.
C. Store the two types of data in two separate KVStore units and create database aliases to mimic one KVStore.
D. Store the two types of data with differing consistency policies.

Answer: B

Explanation:
The KVStore is a collection of Storage Nodes which host a set of Replication Nodes. Data is spread across the Replication Nodes.
Each shard contains one or more partitions. Key-value pairs in the store are organized according to the key. Keys, in turn,
are assigned to a partition. Once a key is placed in a partition, it cannot be moved to a different partition. Oracle NoSQL
Database automatically assigns keys evenly across all the available partitions.
Note: At a very high level, a Replication Node can be thought of as a single database which contains key-value pairs.
Replication Nodes are organized into shards. A shard contains a single Replication Node which is responsible for
performing database writes, and which copies those writes to the other Replication Nodes in the shard.
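The key-to-partition mapping described above can be sketched with a stable hash; this is a conceptual illustration only, not Oracle NoSQL's internal algorithm, and the partition count and sample keys are invented.

# Conceptual sketch (not Oracle NoSQL's internal algorithm): a stable hash
# assigns every key to exactly one partition, and partitions map to shards.
import hashlib

NUM_PARTITIONS = 120   # hypothetical store configuration

def partition_for(key: str) -> int:
    digest = hashlib.md5(key.encode("utf-8")).hexdigest()
    return int(digest, 16) % NUM_PARTITIONS

for k in ("patient/123/vitals", "device/7/telemetry"):
    print(k, "-> partition", partition_for(k))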

QUESTION 4

Your customer is spending a lot of money on archiving data to comply with government regulations to retain data for 10 years.
How should you reduce your customer’s archival costs?

A. Denormalize the data.
B. Offload the data into Hadoop.
C. Use Oracle Data Integrator to improve performance.
D. Move the data into the warehousing database.

Answer: B

Explanation:
Extend Information Lifecycle Management to Hadoop
For many years, Oracle Database has provided rich support for Information Lifecycle Management (ILM). Numerous capabilities are available for data tiering – or storing data in different media based on access requirements and storage cost considerations.

These tiers may scale from:
1) in-memory for real-time data analysis,
2) Database Flash for frequently accessed data,
3) Database Storage and Exadata Cells for queries of operational data, and
4) Hadoop for infrequently accessed raw and archive data.
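A tiering decision like the one above can be expressed as a simple rule on data age or access frequency; the Python sketch below uses invented thresholds purely to illustrate the idea and is not Oracle guidance.

# Hypothetical tiering rule illustrating the four tiers described above;
# the thresholds are invented for the example.
def pick_tier(days_since_last_access: int) -> str:
    if days_since_last_access <= 1:
        return "in-memory"
    if days_since_last_access <= 30:
        return "database flash"
    if days_since_last_access <= 365:
        return "database storage / Exadata cells"
    return "Hadoop (raw and archive data)"

for age in (0, 7, 90, 3650):
    print(f"{age:>5} days -> {pick_tier(age)}")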

QUESTION 5
Your customer keeps getting an error when writing a key/value pair to a NoSQL replica.
What is causing the error?

A. The master may be in read-only mode and as result, writes to replicas are not being allowed.
B. The replica may be out of sync with the master and is not able to maintain consistency.
C. The writes must be done to the master.
D. The replica is in read-only mode.
E. The data file for the replica is corrupt.

Answer: C

Explanation:
Replication Nodes are organized into shards. A shard contains a single Replication Node which is responsible for
performing database writes, and which copies those writes to the other Replication Nodes in the shard. This is called the
master node. All other Replication Nodes in the shard are used to service read-only operations.
Note: Oracle NoSQL Database provides multi-terabyte distributed key/value pair storage that offers scalable throughput
and performance. That is, it services network requests to store and retrieve data which is organized into key-value pairs.
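The write-through-the-master, read-from-any-replica split described above can be modeled with a toy class; this is a teaching sketch with invented node names, not the Oracle NoSQL driver API, and a shared dictionary stands in for actual replication.

# Toy model of the read/write split described above; not the Oracle NoSQL driver API.
import random

class Shard:
    def __init__(self, master, replicas):
        self.master = master
        self.replicas = replicas
        self.data = {}                      # shared state stands in for replication

    def write(self, key, value):
        # All writes go through the master node.
        print(f"write {key!r} via master {self.master}")
        self.data[key] = value

    def read(self, key):
        # Reads can be served by any replica (or the master).
        node = random.choice(self.replicas + [self.master])
        print(f"read {key!r} via {node}")
        return self.data.get(key)

shard = Shard(master="rg1-rn1", replicas=["rg1-rn2", "rg1-rn3"])
shard.write("user:1", {"name": "Ada"})
print(shard.read("user:1"))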