Microsoft Design and Implement Big Data Analytics Solutions (070-475) Free Practice Test
Question 1
You are building an Azure Analysis Services cube.
The source data for the cube is located on premises in a Microsoft SQL Server database. You need to ensure that the Azure Analysis Services service can access the source data. What should you deploy to your Azure subscription?
Correct Answer: B
Question 2
Your company supports multiple Microsoft Azure subscriptions.
You plan to deploy several virtual machines to support the services in Azure.
You need to automate the management of all the subscriptions. The solution must minimize administrative effort.
Which two cmdlets should you run? Each correct answer presents part of the solution.
NOTE: Each correct selection is worth one point.
Correct Answer: B,C
Question 3
You need to design the data load process from DB1 to DB2. Which data import technique should you use in the design?
Correct Answer: C
Question 4
Your company has 2000 servers.
You plan to aggregate all of the log files from the servers in a central repository that uses Microsoft Azure HDInsight. Each log file contains approximately one million records. All of the files use the .log file name extension.
The following is a sample of the entries in the log files.
2017-02-03 20:26:41 SampleClass3 (ERROR) verbose detail for id 1527353937
In Apache Hive, you need to create a data definition and a query capturing the number of records that have an error level of [ERROR].
What should you do? To answer, select the appropriate options in the answer area.
NOTE: Each correct selection is worth one point.

Correct Answer:

Explanation

Box 1: table
Box 2: \t
Box 3: count(*)
Box 4: '*.log'
Apache Hive example:
CREATE TABLE raw (line STRING)
ROW FORMAT DELIMITED FIELDS TERMINATED BY '\t'
LINES TERMINATED BY '\n';
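For readers without a Hive cluster handy, the counting logic behind the query can be sketched in plain Python. The sample records, the "(ERROR)" token, and the helper name below are illustrative assumptions based on the log sample above, not part of the graded answer:

```python
# Sketch: count records whose error level is ERROR, mirroring the
# count(*) aggregation implied by Box 3 of the Hive answer.
sample_lines = [
    "2017-02-03 20:26:41 SampleClass3 (ERROR) verbose detail for id 1527353937",
    "2017-02-03 20:26:42 SampleClass1 (DEBUG) verbose detail for id 1527353938",
]

def count_errors(lines):
    # A record matches when its level field is ERROR, i.e. the line
    # contains the literal token "(ERROR)" as in the sample entry.
    return sum(1 for line in lines if "(ERROR)" in line)

print(count_errors(sample_lines))  # 1
```

In Hive the same filter would typically appear as a WHERE clause on the parsed level column rather than a substring match; the sketch only illustrates the counting semantics.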
Question 5
You have an Apache Hive cluster in Microsoft Azure HDInsight. The cluster contains 10 million data files.
You plan to archive the data.
The data will be analyzed monthly.
You need to recommend a solution to move and store the data. The solution must minimize how long it takes to move the data and must minimize costs.
Which two services should you include in the recommendation? Each correct answer presents part of the solution.
NOTE: Each correct selection is worth one point.
Correct Answer: B,C
Question 6
You plan to deploy a Microsoft Azure Data Factory pipeline to run an end-to-end data processing workflow.
You need to recommend which Azure Data Factory features must be used to meet the following requirements:
Track the run status of the historical activity.
Enable alerts and notifications on events and metrics.
Monitor the creation, updating, and deletion of Azure resources.
Which features should you recommend? To answer, drag the appropriate features to the correct requirements.
Each feature may be used once, more than once, or not at all. You may need to drag the split bar between panes or scroll to view content.
NOTE: Each correct selection is worth one point.

Correct Answer:

Explanation

Box 1: Azure HDInsight logs
Logs contain historical activities.
Box 2: Azure Data Factory alerts
Box 3: Azure Data Factory events
Question 7
You are developing a solution to ingest data in real-time from manufacturing sensors. The data will be archived. The archived data might be monitored after it is written.
You need to recommend a solution to ingest and archive the sensor data. The solution must allow alerts to be sent to specific users as the data is ingested.
What should you include in the recommendation?
Correct Answer: C
Question 8
You have a Microsoft Azure data factory named ADF1 that contains a pipeline named Pipeline1.
You plan to automate updates to Pipeline1.
You need to build the URL that must be called to update the pipeline from the REST API.
How should you complete the URL? To answer, drag the appropriate URL elements to the correct locations.
Each URL element may be used once, more than once, or not at all. You may need to drag the split bar between panes or scroll to view content.
NOTE: Each correct selection is worth one point.

Correct Answer:
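The graded drag-and-drop answer is not reproduced here, but the general shape of the Azure Resource Manager URL for updating a Data Factory pipeline can be sketched as follows. The placeholder values and the api-version are illustrative assumptions:

```python
# Sketch of the management REST URL for updating a Data Factory pipeline
# (the Pipelines - Create Or Update operation, invoked with PUT).
# Placeholder arguments and the default api-version are assumptions.
def pipeline_update_url(subscription_id, resource_group, factory, pipeline,
                        api_version="2018-06-01"):
    return (
        "https://management.azure.com"
        f"/subscriptions/{subscription_id}"
        f"/resourceGroups/{resource_group}"
        "/providers/Microsoft.DataFactory"
        f"/factories/{factory}"
        f"/pipelines/{pipeline}"
        f"?api-version={api_version}"
    )

print(pipeline_update_url("sub-id", "rg1", "ADF1", "Pipeline1"))
```

The exam answer asks you to assemble exactly these path segments (subscription, resource group, provider, factory, pipeline) in order.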

Question 9
A company named Fabrikam, Inc. has a web app. Millions of users visit the app daily.
Fabrikam performs a daily analysis of the previous day's logs by scheduling the following Hive query.

You need to recommend a solution to gather the log collections from the web app.
What should you recommend?
Correct Answer: A
Question 10
You need to recommend a permanent Azure Storage solution for the activity data. The solution must meet the technical requirements.
What is the best recommendation to achieve the goal? More than one answer choice may achieve the goal.
Select the BEST answer.
Correct Answer: B