
Accelerating Genome Workloads Using the OpenVINO™ Integration with TensorFlow


Authors: Mustafa Cavus, Ravi Panchumarthy, Anthony Reina, Yamini Nimmagadda, Prashant Shah, Sesh Seshagiri, Arindam Paul

Guest Authors from Broad Institute: Samuel Friedman, Geraldine Van der Auwera

In this blog post, we’ll show how we accelerated the performance of the Genome Analysis Toolkit (GATK), a popular genome sequencing workload, using the OpenVINO integration with TensorFlow. GATK was developed by the Broad Institute, which worked closely with us on this blog.

By changing just two lines of code, we achieved a speed-up of 21 percent on an Intel® Core™ i5 processor and 16.8 percent on an Intel® Xeon® processor. Further optimizations increased the speed-ups to 32 percent and 19.5 percent, respectively.

We’ll show you how we achieved our results, and how you can replicate them.

| Configuration | Intel® Core™ i5 processor speed-up | Intel® Xeon® processor speed-up |
| --- | --- | --- |
| OpenVINO™ integration with TensorFlow | 21% | 16.8% |
| OpenVINO™ integration with TensorFlow, with optimized inferencing code | 32% | 19.5% |


Table 1: Speed-Ups achieved on GATK using the OpenVINO integration with TensorFlow

Introducing OpenVINO integration with TensorFlow


The OpenVINO toolkit was released in 2018. Using the toolkit, developers can optimize the performance of deep learning models on Intel® architecture. Developers use the OpenVINO API to develop their applications, and can import models from TensorFlow, PyTorch, and ONNX. 

Now, Intel has released the OpenVINO integration with TensorFlow. Using it, developers can improve the performance of inferencing on Intel architecture, while still using TensorFlow APIs. 

No explicit or offline model conversions are required. Optimizations are carried out inline, with only two lines of code required to import and configure OpenVINO. Using the OpenVINO integration requires a minimal amount of additional memory and disk space.

A wide range of TensorFlow models and operators are supported, with more still being added. Accuracy is nearly identical to the original model, as we show later in this blog post. 

The OpenVINO integration with TensorFlow partitions TensorFlow graphs into multiple subgraphs. These subgraphs are then dispatched to the OpenVINO runtime for accelerated inferencing or to the TensorFlow runtime if OpenVINO does not support the subgraph. The results are combined at the end to provide the final inferencing results. Figure 1 shows the workflow.

Figure 1: Workflow diagram showcasing the OpenVINO™ integration with TensorFlow

The OpenVINO integration with TensorFlow accelerates inference on a range of Intel® processors such as:

  • Intel® CPUs (e.g., Core™ and Xeon®, including newer SKUs such as Tiger Lake and Ice Lake)
  • Intel® integrated GPUs
  • Intel® Movidius™ Vision Processing Units (VPUs)
  • Intel® Vision Accelerator Design with 8 Intel® Movidius™ Myriad™ X VPUs (VAD-M) 

The OpenVINO integration with TensorFlow is designed for developers who want to use the OpenVINO toolkit to enhance inferencing performance with minimal code modifications. For maximum performance, efficiency, tooling customization, and hardware control, we recommend adopting the full Intel® Distribution of OpenVINO™ toolkit, using its native OpenVINO APIs and the native OpenVINO runtime.

How to add the OpenVINO integration with TensorFlow


To accelerate TensorFlow performance with the integrated OpenVINO toolkit, add these two lines to your Python code (see Figure 2):

import openvino_tensorflow
openvino_tensorflow.set_backend('<backend_name>') 


Supported backends include 'CPU', 'GPU', 'MYRIAD', and 'VAD-M'. You can only use one backend at a time in the OpenVINO integration with TensorFlow. The full Intel® Distribution of OpenVINO toolkit enables multiple different backends to be used simultaneously.
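For example, here is a minimal, self-contained sketch of a complete script (the MobileNetV2 model and random input are illustrative choices of ours, not part of GATK):

import numpy as np
import tensorflow as tf
import openvino_tensorflow

# Enable the OpenVINO integration; everything else is plain TensorFlow.
openvino_tensorflow.set_backend('CPU')

# Illustrative model and input; any supported TensorFlow model works the same way.
model = tf.keras.applications.MobileNetV2(weights=None)
dummy_input = np.random.rand(1, 224, 224, 3).astype(np.float32)

# Inference runs through the familiar TensorFlow APIs; supported subgraphs
# are dispatched to the OpenVINO runtime behind the scenes.
predictions = model.predict(dummy_input)
print(predictions.shape)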

Figure 2: Code snippet showing how easy it is to use different hardware with the openvino_tensorflow package.

Accelerating the performance of gene sequencing inference


Terra is a cloud platform for biomedical researchers to access data, run analysis tools, and collaborate. The platform is open source and is co-developed by the Broad Institute of MIT and Harvard, Microsoft, and Verily. Terra includes GATK, the world’s most widely used open-source toolkit for variant calling: the process of identifying differences between genome sequences.

CNNScoreVariants is one of the deep learning tools included in GATK. It uses a Convolutional Neural Network (CNN) to annotate gene sequence variations. We’ll show you how to accelerate inference performance of CNNScoreVariants without losing accuracy using the new OpenVINO™ integration with TensorFlow.

Installing CNNScoreVariants using the OpenVINO integration with TensorFlow


Figure 3 shows the workflow of CNNScoreVariants when used with the OpenVINO™ integration with TensorFlow. 
Figure 3: High-level overview of the CNNScoreVariants workflow.

You can access this workflow in several ways, which we will describe in this blog.
 

  • The Terra platform includes a docker image you can use with a Jupyter Notebook
     
  • You can run CNNScoreVariants locally using our Docker image
     
  • You can download, build, and install GATK locally on a Linux server

Setting up CNNScoreVariants on Terra Platform


In the Terra platform, the OpenVINO integration with TensorFlow image is available under the Community-Maintained Jupyter images. Please follow the steps below to use the docker image on the Terra platform.

1. Log in to the Terra platform.

2. Click the menu icon at the top-left corner of the page and choose “Workspaces”.


3. Click the “+” icon to create a new workspace.

4. Choose a workspace name and select the billing project. Then, click the “CREATE WORKSPACE” button.

5. Once the workspace opens, click the “Update cloud information” button on the top-right corner of the page.

6. Use the drop-down menu “Application configuration” to choose the Docker image called “OpenVINO™ integration with TensorFlow (openvino-tensorflow 0.5.0, Python 3.7.10, GATK 4.2.0.0)”. Then, click the “CREATE” button. Please note that the listed versions might change as we keep updating the images.

7. To start the cloud environment, click the start button at the top-right corner of the page. Starting the cloud environment might take a few minutes.

8. Once the cloud environment starts, choose the “NOTEBOOKS” tab and create a new notebook. Choose a name for your notebook and select Python 3 as the language.

9. Once the notebook is created, click it and select “EDIT”. To test the performance of the OpenVINO integration with TensorFlow, run the sample notebook GATK-OVTF-Notebook.ipynb.


Setting up CNNScoreVariants locally using Docker 


Docker provides a quick way to run the optimized CNNScoreVariants with the OpenVINO™ integration with TensorFlow. There are two ways to obtain the Docker image. 

Method 1: Pull the Docker image and run it.

  1. Use us.gcr.io/broad-dsp-gcr-public/terra-jupyter-gatk-ovtf:latest. See CHANGELOG.md for other versions if needed.
    docker pull us.gcr.io/broad-dsp-gcr-public/terra-jupyter-gatk-ovtf:latest
  2. Run the docker image.
    docker run --rm -it -p 8000:8000 us.gcr.io/broad-dsp-gcr-public/terra-jupyter-gatk-ovtf:latest
  3. In a browser, visit http://localhost:8000/notebooks to access the Jupyter UI.
     

Method 2: Build the Docker image locally and run it.

  1. Clone the repository. 
    git clone https://github.com/DataBiosphere/terra-docker
  2. Change directory.
    cd terra-docker/terra-jupyter-gatk-ovtf 
  3. Build the Docker image.
    docker build . -t terra-jupyter-gatk-ovtf
  4. Run the docker image.
    docker run --rm -it -p 8000:8000 terra-jupyter-gatk-ovtf
  5. In a browser, visit http://localhost:8000/notebooks to access the Jupyter UI.

To test the performance of the OpenVINO integration with TensorFlow, run the sample notebook - GATK-OVTF-Notebook.ipynb. Find out more about using the Jupyter notebook below. 


NOTE: You can gain root access and open a bash terminal as follows:

docker run --rm -it -u root -p 8000:8000 --entrypoint /bin/bash terra-jupyter-gatk-ovtf

Note: To disable the OpenVINO™ integration with TensorFlow and run GATK on standard TensorFlow, set the OPENVINO_TF_DISABLE environment variable to 1.

export OPENVINO_TF_DISABLE=1
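For example, to compare the two runtimes back to back inside the container (arguments elided here; the full CNNScoreVariants command appears in the sections below):

export OPENVINO_TF_DISABLE=1   # run on stock TensorFlow
gatk CNNScoreVariants ...      # baseline run
unset OPENVINO_TF_DISABLE      # re-enable the OpenVINO integration
gatk CNNScoreVariants ...      # accelerated run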

Setting up CNNScoreVariants by installing locally on a Linux Server


In this section, we will show you how to install CNNScoreVariants with OpenVINO™ integration with TensorFlow on a local Linux server.

  1. Clone the repository and check out tag 4.2.0.0. 
    git clone https://github.com/broadinstitute/gatk.git
    cd gatk
    git checkout 4.2.0.0
     
  2. Build GATK.
    ./gradlew bundle
  3. After building GATK, an executable named ‘gatk’ should be created in the project folder. Additionally, you need to either install gatktools or add the ‘gatktools’ Python package to your PYTHONPATH.

    Option 1: To install gatktools using pip:
    pip install build/gatkPythonPackageArchive.zip
    Option 2: Set the Python path to the gatktools source:

    export PYTHONPATH=$PYTHONPATH:<gatk_directory_path>/src/main/python/org/broadinstitute/hellbender
     
  4. Install TensorFlow and the OpenVINO integration with TensorFlow.
    Note: Verify the supported TensorFlow version on the openvino-tensorflow PyPI project page.
    pip install --force-reinstall tensorflow==2.5.0 
    pip install openvino_tensorflow
     
  5. We need to upgrade CNNScoreVariants to TensorFlow 2.x. Open this file in a code editor:
    src/main/python/org/broadinstitute/hellbender/vqsr_cnn/vqsr_cnn/models.py
     
  6. Add the line below to import TensorFlow.
    import tensorflow as tf
     
  7. Use the table below to update the lines shown to their new versions (the combined result appears after the table).

| Old version | New version |
| --- | --- |
| import keras.backend as K | import tensorflow.compat.v1.keras.backend as K |
| cfg = K.tf.ConfigProto(intra_op_parallelism_threads=intra_ops, inter_op_parallelism_threads=inter_ops) | cfg = tf.compat.v1.ConfigProto(intra_op_parallelism_threads=intra_ops, inter_op_parallelism_threads=inter_ops) |
| K.set_session(K.tf.Session(config=cfg)) | K.set_session(tf.compat.v1.Session(config=cfg)) |
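Putting steps 6 and 7 together, the relevant portion of models.py after the upgrade looks like this (intra_ops and inter_ops come from the surrounding function, as in the original file):

import tensorflow as tf
import tensorflow.compat.v1.keras.backend as K

cfg = tf.compat.v1.ConfigProto(intra_op_parallelism_threads=intra_ops,
                               inter_op_parallelism_threads=inter_ops)
K.set_session(tf.compat.v1.Session(config=cfg))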

Measuring the speed-up from the OpenVINO integration


To measure the speed-up from the OpenVINO integration with TensorFlow, we ran CNNScoreVariants twice. The first time, we ran it using standard TensorFlow. The second time, we ran it using the OpenVINO integration with TensorFlow.

Running with standard TensorFlow

If you did not set the PYTHONPATH to the ‘gatktools’ source directory, you need to rebuild GATK and reinstall the gatktools Python package before running this command. 

gatk CNNScoreVariants \
        -I gs://gatk-tutorials/workshop_2002/2-germline/CNNScoreVariants/bams/g94982_chr20_1m_10m_bamout.bam \
        -V gs://gatk-tutorials/workshop_2002/2-germline/CNNScoreVariants/vcfs/g94982_b37_chr20_1m_15871.vcf.gz \
        -R gs://gcp-public-data--broad-references/hg19/v0/Homo_sapiens_assembly19.fasta \
        -O data/my_2d_cnn_scored.vcf \
        --tensor-type read_tensor \
        --transfer-batch-size 256 \
        --inference-batch-size 256


The last few lines of the output should look like this:

...
03:15:56.984 INFO  ProgressMeter -           20:8840293              8.6                 31742           3697.5
03:15:56.984 INFO  ProgressMeter - Traversal complete. Processed 31742 total variants in 8.6 minutes.
03:15:56.984 INFO  CNNScoreVariants - Done scoring variants with CNN.
03:15:57.073 INFO  CNNScoreVariants - Shutting down engine
[May 18, 2021 at 3:15:57 a.m. PDT] org.broadinstitute.hellbender.tools.walkers.vqsr.CNNScoreVariants done. Elapsed time: 8.67 minutes.
Runtime.totalMemory()=364904448


The overall throughput of this execution is 3697.5 variants per minute, as reported in the final ProgressMeter line above.


Running with the OpenVINO integration with TensorFlow


As mentioned before, enabling the OpenVINO integration with TensorFlow only requires adding two lines of code to inference.py to import the integration and set the backend:

import openvino_tensorflow
openvino_tensorflow.set_backend('CPU')

Run CNNScoreVariants with the same parameters to see the performance improvement.

gatk CNNScoreVariants \
        -I gs://gatk-tutorials/workshop_2002/2-germline/CNNScoreVariants/bams/g94982_chr20_1m_10m_bamout.bam \
        -V gs://gatk-tutorials/workshop_2002/2-germline/CNNScoreVariants/vcfs/g94982_b37_chr20_1m_15871.vcf.gz \
        -R gs://gcp-public-data--broad-references/hg19/v0/Homo_sapiens_assembly19.fasta \
        -O data/my_2d_cnn_scored.vcf \
        --tensor-type read_tensor \
        --transfer-batch-size 256 \
        --inference-batch-size 256


The last few lines of the output should look like this:

03:25:06.333 INFO  ProgressMeter -           20:8840293              7.1                 31742           4456.8
03:25:06.333 INFO  ProgressMeter - Traversal complete. Processed 31742 total variants in 7.1 minutes.
03:25:06.333 INFO  CNNScoreVariants - Done scoring variants with CNN.
03:25:06.429 INFO  CNNScoreVariants - Shutting down engine
[May 18, 2021 at 3:25:06 a.m. PDT] org.broadinstitute.hellbender.tools.walkers.vqsr.CNNScoreVariants done. Elapsed time: 7.60 minutes.
Runtime.totalMemory()=315621376


The overall throughput of this execution is 4456.8 variants per minute, about 21 percent higher than the baseline throughput (4456.8 / 3697.5 ≈ 1.21).

Running CNNScoreVariants on Jupyter Notebook 


We also provided a sample Jupyter notebook - GATK-OVTF-Notebook.ipynb. The notebook executes the following steps: 
 

  1. The notebook uses the command below to run CNNScoreVariants on the native TensorFlow runtime.
     
    OPENVINO_TF_DISABLE=1 \
    gatk CNNScoreVariants \
            -I gs://gatk-tutorials/workshop_2002/2-germline/CNNScoreVariants/bams/g94982_chr20_1m_10m_bamout.bam \
            -V gs://gatk-tutorials/workshop_2002/2-germline/CNNScoreVariants/vcfs/g94982_b37_chr20_1m_15871.vcf.gz \
            -R gs://gcp-public-data--broad-references/hg19/v0/Homo_sapiens_assembly19.fasta \
            -O my_2d_cnn_scored.vcf \
            --tensor-type read_tensor \
            --transfer-batch-size 256 \
            --inference-batch-size 256
  2. The notebook uses the command below to run CNNScoreVariants using the OpenVINO integration with TensorFlow.
     
    gatk CNNScoreVariants \
            -I gs://gatk-tutorials/workshop_2002/2-germline/CNNScoreVariants/bams/g94982_chr20_1m_10m_bamout.bam \
            -V gs://gatk-tutorials/workshop_2002/2-germline/CNNScoreVariants/vcfs/g94982_b37_chr20_1m_15871.vcf.gz \
            -R gs://gcp-public-data--broad-references/hg19/v0/Homo_sapiens_assembly19.fasta \
            -O my_2d_cnn_scored.vcf \
            --tensor-type read_tensor \
            --transfer-batch-size 256 \
            --inference-batch-size 256


When you run the notebook, you’ll see the test results using standard TensorFlow, followed by those using the OpenVINO integration with TensorFlow.
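If you want to compute the speed-up programmatically from the notebook output, a small sketch like the one below works against the ProgressMeter format shown earlier (the regex and function name are our own, not part of GATK):

import re

def throughput(log_text):
    # ProgressMeter rows have the form:
    # "... ProgressMeter - <locus> <minutes> <variants> <variants/min>"
    rows = re.findall(r'ProgressMeter -\s+\S+\s+[\d.]+\s+\d+\s+([\d.]+)', log_text)
    return float(rows[-1]) if rows else None

# speedup = throughput(openvino_log) / throughput(baseline_log)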

Improving performance further with code optimizations for CNNScoreVariants


We saw a significant improvement with just two additional lines of code, as shown above. Now, we’ll share some additional tweaks we made to further improve the CNNScoreVariants workload. 

We have shared all these optimizations as a patch in the GitHub repository and in the Dockerfile, so you can replicate these results easily.

Please apply the changes below before building GATK (see the steps for building GATK in section “Setting up CNNScoreVariants by installing locally on a Linux Server”).


Freezing the Keras model


Freezing the Keras model clears the training ops from the model and prepares it for deployment. This improves the model coverage of the OpenVINO integration and yields even better performance. Follow the steps below to update the “src/main/python/org/broadinstitute/hellbender/vqsr_cnn/vqsr_cnn/inference.py” file.

  1. Create global variables for the TensorFlow session and the output tensor by adding the two lines marked with comments below:
     
    ...
    VARIANT_TYPE_FIFO_INDEX = 6
    VARIANT_FIFO_FIELDS = 7
    
    session = None     # added: cached session for the frozen graph
    out_tensor = None  # added: cached output tensor of the frozen graph
    
    CIGAR_CODES_TO_COUNT = [
    ...

     
  2. Add the two lines marked with comments below near the beginning of the function ‘score_and_write_batch’:
     
    ...
    variant_data = []
    read_batch = []

    global session     # added: reuse the session across batches
    global out_tensor  # added

    for _ in range(batch_size):
    ...
     
  3. Replace the block shown below:
     
    ...
    elif tensor_type in defines.TENSOR_MAPS_2D:
        predictions = model.predict(
        [np.array(read_batch), np.array(annotation_batch)],
        batch_size=python_batch_size)
    else:
        raise ValueError('Unknown tensor mapping.  Check architecture file.',
        tensor_type)
    ...

    with the code below. This requires adding the import ‘from tensorflow.python.framework.convert_to_constants import convert_variables_to_constants_v2’ near the top of inference.py:
     
    ...
    elif tensor_type in defines.TENSOR_MAPS_2D:
        if session is None:
            # Freeze the Keras model into a constant graph once and cache
            # the session and output tensor for reuse on later batches.
            full_model = tf.function(lambda x: model(x))
            full_model = full_model.get_concrete_function(
                (tf.TensorSpec(model.inputs[0].shape,
                               model.inputs[0].dtype, name="read_tensor"),
                 tf.TensorSpec(model.inputs[1].shape,
                               model.inputs[1].dtype, name="best_practices")))
            frozen_func = convert_variables_to_constants_v2(full_model)
            frozen_func.graph.as_graph_def()
            session = tf.compat.v1.Session(graph=frozen_func.graph)
            out_tensor = frozen_func.graph.get_tensor_by_name(
                "model_1/softmax_predictions/Softmax:0")
        batch = {}
        batch["read_tensor:0"] = np.array(read_batch)
        batch["best_practices:0"] = np.array(annotation_batch)
        predictions = session.run(out_tensor, feed_dict=batch)
    else:
        raise ValueError('Unknown tensor mapping.  Check architecture file.',
                         tensor_type)
    ...

     

Fill the final batch


CNNScoreVariants runs inference on batches of input data, with the batch size specified by the user. The input data is divided into batches, and inference runs on one batch at a time until all the input is consumed. However, the total input size may not be a multiple of the batch size, so the final batch can be smaller than the rest. A smaller batch changes the input shape, which forces the model to be recompiled for the new shape and adds latency. To avoid this, we simply pad the final batch with placeholder data so it matches the batch size of the earlier iterations. This prevents the recompilation and improves performance, and it does not affect the output, because the predictions computed from the placeholder data are simply discarded.

The changes below are made in the file “src/main/python/org/broadinstitute/hellbender/vqsr_cnn/vqsr_cnn/inference.py”.

1. Add the loop marked with a comment below, just before the line ‘batch = {}’:

...
        read_batch.append(tensor)

    # added: pad the final batch with placeholder tensors so its shape
    # matches the earlier, full-sized batches
    for _ in range(python_batch_size - batch_size):
        tensor = np.empty(read_batch[0].shape)
        read_batch.append(tensor)
        tensor = np.empty(annotation_batch[0].shape)
        annotation_batch.append(tensor)

    batch = {}
...
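As a standalone illustration of this padding logic (the batch sizes and tensor shape below are hypothetical, not taken from GATK):

import numpy as np

python_batch_size = 256   # configured inference batch size
batch_size = 100          # real variants left in the final batch

# Hypothetical read tensors; the real shapes come from the model inputs.
read_batch = [np.zeros((128, 128, 15), dtype=np.float32) for _ in range(batch_size)]

# Pad to a full batch so the input shape matches earlier iterations
# and the frozen graph does not need to be recompiled.
for _ in range(python_batch_size - batch_size):
    read_batch.append(np.empty(read_batch[0].shape, dtype=np.float32))

assert len(read_batch) == python_batch_size
# After inference, only the first batch_size predictions are kept;
# rows computed from the padding are ignored.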


Seeing the performance improvement


Now run CNNScoreVariants with the same parameters again to see the further improvement in the performance:

gatk CNNScoreVariants \
        -I gs://gatk-tutorials/workshop_2002/2-germline/CNNScoreVariants/bams/g94982_chr20_1m_10m_bamout.bam \
        -V gs://gatk-tutorials/workshop_2002/2-germline/CNNScoreVariants/vcfs/g94982_b37_chr20_1m_15871.vcf.gz \
        -R gs://gcp-public-data--broad-references/hg19/v0/Homo_sapiens_assembly19.fasta \
        -O data/my_2d_cnn_scored.vcf \
        --tensor-type read_tensor \
        --transfer-batch-size 256 \
        --inference-batch-size 256


The last few lines of the output should look like this:

03:36:44.913 INFO  ProgressMeter -           20:8840293              6.5                 31742           4863.9
03:36:44.913 INFO  ProgressMeter - Traversal complete. Processed 31742 total variants in 6.5 minutes.
03:36:44.913 INFO  CNNScoreVariants - Done scoring variants with CNN.
03:36:44.969 INFO  CNNScoreVariants - Shutting down engine
[May 18, 2021 at 3:36:44 a.m. PDT] org.broadinstitute.hellbender.tools.walkers.vqsr.CNNScoreVariants done. Elapsed time: 7.16 minutes.
Runtime.totalMemory()=434110464


The throughput of this execution is 4863.9 variants per minute, about 32 percent higher than the baseline throughput (4863.9 / 3697.5 ≈ 1.32).

Performance results


We measured the performance using an Intel® Core™ i5 processor on the desktop and an Intel® Xeon® processor on the Terra platform.

We used the “g94982_chr20_1m_10m_bamout.bam” dataset. You can find it here:

gs://gatk-tutorials/workshop_2002/2-germline/CNNScoreVariants/bams/g94982_chr20_1m_10m_bamout.bam

We saw significant improvement by simply adopting the OpenVINO integration with TensorFlow, which required just two lines of code. We saw even greater improvements with the code optimizations described above. Table 2 below summarizes the results.

 

| Framework used to run the CNNScoreVariants workload | Intel® Core™ i5 processor (4 cores, 8 GB RAM): Throughput (variants/min) | Speed-up | Intel® Xeon® processor (4 cores, 15 GB RAM): Throughput (variants/min) | Speed-up |
| --- | --- | --- | --- | --- |
| Standard TensorFlow (v2.4.1) | 3697.5 | - | 2037.0 | - |
| OpenVINO™ integration with TensorFlow (v2.4.1, v0.5.0) | 4456.8 | 21% | 2380.2 | 16.8% |
| OpenVINO™ integration with TensorFlow (v2.4.1, v0.5.0) with optimized CNNScoreVariants | 4863.9 | 32% | 2433.8 | 19.5% |

Table 2: Performance for CNNScoreVariants measured on an Intel® Core™ i5 processor and an Intel® Xeon® processor


The time saving was:
 

  • 1.51 minutes saved over 8.67 mins of execution (~10 mins per hour), on the Intel Core i5 processor
  • 2.51 minutes saved over 15.72 mins of execution (~9 mins per hour) on the Intel Xeon processor

Since inferencing with a large dataset can take days, the OpenVINO integration with TensorFlow can save significant inference time and reduce the associated cloud costs for GATK CNN developers.

 
Accuracy 


We compared the output files generated using standard TensorFlow and the OpenVINO integration with TensorFlow. Three samples out of 15870 showed a variation in the third decimal place of the CNN_2D score; the final classifications were identical. 

Here is one of the three samples that showed a variation:

#CHROM    POS    ID    REF    ALT    QUAL    FILTER    INFO    FORMAT    NA12878

Standard TensorFlow

20  1834907 rs149222    G   A   212.77  PASS    AC=1;AF=0.500;AN=2;BaseQRankSum=0.960;CNN_2D=3.466;ClippingRankSum=0.000;DB;DP=30;ExcessHet=3.0103;FS=1.447;MLEAC=1;MLEAF=0.500;MQ=60.00;MQRankSum=0.000;POSITIVE_TRAIN_SITE;QD=7.34;ReadPosRankSum=-0.787;SOR=0.998;VQSLOD=22.20;culprit=MQ    GT:AD:DP:GQ:PL  0/1:18,11:29:99:241,0,441
 

OpenVINO integration with TensorFlow

20  1834907 rs149222    G   A   212.77  PASS    AC=1;AF=0.500;AN=2;BaseQRankSum=0.960;CNN_2D=3.467;ClippingRankSum=0.000;DB;DP=30;ExcessHet=3.0103;FS=1.447;MLEAC=1;MLEAF=0.500;MQ=60.00;MQRankSum=0.000;POSITIVE_TRAIN_SITE;QD=7.34;ReadPosRankSum=-0.787;SOR=0.998;VQSLOD=22.20;culprit=MQ    GT:AD:DP:GQ:PL  0/1:18,11:29:99:241,0,441
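To reproduce this comparison yourself, a sketch along these lines extracts and diffs the CNN_2D scores from the two output VCFs (the file names and helper function are our own, not part of GATK):

import gzip
import re

def cnn2d_scores(vcf_path):
    # Map (chrom, pos) -> CNN_2D score from the VCF's INFO column.
    scores = {}
    opener = gzip.open if vcf_path.endswith('.gz') else open
    with opener(vcf_path, 'rt') as f:
        for line in f:
            if line.startswith('#'):
                continue
            fields = line.split('\t')
            m = re.search(r'CNN_2D=(-?[\d.]+)', fields[7])
            if m:
                scores[(fields[0], fields[1])] = float(m.group(1))
    return scores

baseline = cnn2d_scores('my_2d_cnn_scored_tf.vcf')    # hypothetical file names
openvino = cnn2d_scores('my_2d_cnn_scored_ovtf.vcf')
diffs = [k for k in baseline if k in openvino and baseline[k] != openvino[k]]
print(f"{len(diffs)} of {len(baseline)} variants differ in CNN_2D")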


About the Broad Institute


The Broad Institute of MIT and Harvard was founded in 2003 to empower creative scientists to transform medicine with new genome-based knowledge. The Broad Institute seeks to describe the molecular components of life and their connections; discover the molecular basis of major human diseases; develop effective new approaches to diagnostics and therapeutics; and disseminate discoveries, tools, methods and data openly to the entire scientific community.

Founded by MIT, Harvard and its affiliated hospitals, and the visionary Los Angeles philanthropists Eli and Edythe L. Broad, the Broad Institute includes faculty, professional staff and students from throughout the MIT and Harvard biomedical research communities and beyond. The institute collaborates with more than a hundred private and public institutions in more than 40 countries worldwide. For further information about the Broad Institute, go to www.broadinstitute.org.


 

Notices & Disclaimers

Performance varies by use, configuration and other factors. Learn more at www.intel.com/PerformanceIndex.

Performance results are based on testing as of dates shown in configurations and may not reflect all publicly available updates.  See backup for configuration details.  No product or component can be absolutely secure. 

Your costs and results may vary. 

Intel technologies may require enabled hardware, software or service activation.

Intel disclaims all express and implied warranties, including without limitation, the implied warranties of merchantability, fitness for a particular purpose, and non-infringement, as well as any warranty arising from course of performance, course of dealing, or usage in trade.

© Intel Corporation.  Intel, the Intel logo, and other Intel marks are trademarks of Intel Corporation or its subsidiaries.  Other names and brands may be claimed as the property of others.  
