Weekly Schedule

TIME | CLASS
---|---
16:40 - 18:30 | CENG 505 (T) INT2
18:40 - 20:30 | CENG 505 (T & L) INT2
Instructor office: Computer Engineering Department, A318
TA office: Computer Engineering Department,
Watch this space for the latest updates (if the characters do not display properly, please try viewing this page with Unicode (UTF-8) encoding). Last updated:
Murat Altun - Storage Area Network (SAN)
Çağlar Günel - Design Patterns for Parallel Programming
Derya Akbulut - Parallel Algorithms for 2D Cutting Stock Problems
Atıl Kurt - Parallel Programming for Scheduling Algorithms
Funda Karabak - Branch and Bound Algorithms with Parallel Computing
Nizamettin Doğan Güner - Supercomputers Around the World
Duygu Özen - Travelling Salesman Problem
Mesut Körpe - Grid Computing
Özgür Pekçağlıyan - Parallelism In Databases
Sait Mutlu - Implementing MPI on Windows and Unix, Performance Comparison
Erdem Genç - Efficiency of Parallel Programming Using Microsoft Technologies
Tan Atagören - GPGPU (General-Purpose Computing On Graphics Processing Units)
Compress all files into a single archive named “yourstudentID.zip” (or .tar/.rar/.tgz/.gz); otherwise your assignment will not be evaluated.
When this archive is decompressed, all files should reside in a directory named “yourstudentID”; otherwise your assignment will not be evaluated.
Submit your code and results (tables, plots, comments, ...) both via e-mail and on paper to me; otherwise your assignment will not be evaluated.
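The packaging steps above can be sketched as follows (the student ID 20101234 and the file names are hypothetical, shown only for illustration):

```shell
# Collect everything in a directory named after your student ID.
mkdir -p 20101234
echo 'int main(void){return 0;}' > 20101234/main.c      # your source files
echo 'timing results here'       > 20101234/results.txt # your results

# Create the archive named after your student ID.
tar czf 20101234.tgz 20101234   # or: zip -r 20101234.zip 20101234

# Verify: the listing should show files inside the 20101234/ directory.
tar tzf 20101234.tgz
```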
This course provides an introduction to parallel and distributed computing and practical experiences in writing parallel programs on a cluster of computers. You will learn about the following topics:
Parallel Computers
Message Passing Computing
Embarrassingly Parallel Computations
Partitioning and Divide-and-Conquer Strategies
Pipelined Computations
Synchronous Computations
Load Balancing
Programming with Shared Memory
Topics can be classified into two main parts:
Parallel computers: architectural types, shared memory, message passing, interconnection networks, potential for increased speed.
Basic techniques: embarrassingly parallel computations, partitioning and divide and conquer, pipelined computations, synchronous computations, load balancing, shared memory programming.
There is one lecture section. You will be expected to complete significant programming assignments, as well as run programs we supply and analyze their output. Since we will program in C in a UNIX environment, some experience using C on UNIX will be important. We will provide tutorials on basic C on UNIX during the first few class periods.
In lab sessions, we will concentrate on the message-passing approach to parallel computing, using the standard parallel computing environment MPI (Message Passing Interface). Thread-based programming will also be outlined, along with the distributed shared memory (DSM) approach, time permitting. Each student will complete a project based on parallel computing for the laboratory study.
Each student will also complete a project on parallel computing (or distributed or cluster computing) for the midterm exam.
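As a taste of what the lab sessions involve, here is a minimal MPI program in C (a generic "hello world" sketch, not taken from the course materials). It is typically compiled with mpicc and launched with mpirun, e.g. `mpirun -np 4 ./hello`:

```c
/* Minimal MPI sketch: every process reports its rank.
   Compile: mpicc hello.c -o hello
   Run:     mpirun -np 4 ./hello */
#include <mpi.h>
#include <stdio.h>

int main(int argc, char *argv[]) {
    int rank, size;
    MPI_Init(&argc, &argv);               /* start the MPI runtime     */
    MPI_Comm_rank(MPI_COMM_WORLD, &rank); /* this process's id         */
    MPI_Comm_size(MPI_COMM_WORLD, &size); /* total number of processes */
    printf("Hello from process %d of %d\n", rank, size);
    MPI_Finalize();                       /* shut down cleanly         */
    return 0;
}
```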
Important announcements will be posted to the Announcements section of this web page above, so please check this page frequently. You are responsible for all such announcements, as well as announcements made in lecture.
Principles of Parallel Programming, by C. Lin and L. Snyder, Addison-Wesley 2009, ISBN 0-32-148790-7.
Parallel Programming: Techniques and Application Using Networked Workstations and Parallel Computers, 2nd edition, by B. Wilkinson and M. Allen, Prentice Hall Inc., 2005, ISBN 0-13-140563-2.
Beowulf Cluster Computing with Linux, 2nd edition, edited by William Gropp, Ewing Lusk, Thomas Sterling, MIT Press, 2003, ISBN 0-262-69292-9.
Beowulf Cluster Computing with Windows, Thomas Sterling, MIT Press, 2001, ISBN 0-262-69275-9.
Using MPI: Portable Parallel Programming with the Message Passing Interface, William Gropp, Ewing Lusk and Anthony Skjellum, The MIT Press, 1999, ISBN 0-262-57132-3.
Using MPI-2, Advanced Features of the Message Passing Interface, William Gropp, Ewing Lusk, Rajeev Thakur, The MIT Press, 1999, ISBN 0-262-57133-1.
MPI: The Complete Reference (Vol. 1) - The MPI Core, Marc Snir, Steve Otto, Steven Huss-Lederman, David Walker and Jack Dongarra, The MIT Press, 1998, ISBN 0-262-69215-5.
MPI: The Complete Reference (Vol. 2) - The MPI-2 Extensions, William Gropp, Steven Huss-Lederman, Andrew Lumsdaine, Ewing Lusk, Bill Nitzberg, William Saphir and Marc Snir, The MIT Press, 1998, ISBN 0-262-57123-4.
In Search of Clusters: The ongoing battle in lowly parallel computing, Second Edition, by Gregory F. Pfister, Prentice Hall Publishing Company, 1998, ISBN: 0-13-899709-8.
How to Build a Beowulf – A Guide to the Implementation and Application of PC Clusters, by Thomas Sterling, John Salmon, Donald J. Becker and Daniel F. Savarese, MIT Press, 1999, ISBN 0-262-69218-X.
PVM: Parallel Virtual Machine, A Users' Guide and Tutorial for Network Parallel Computing, Al Geist, Adam Beguelin, Jack Dongarra, Weicheng Jiang, Robert Manchek and Vaidyalingam S. Sunderam, MIT Press, 1994, ISBN 0-262-57108-0.
These texts are recommended rather than required; they are useful for reference and for an alternative point of view.
Some materials are provided. Please let me know how useful you find them, and check this place for updates.
The following references are available online.
There will be a final exam: 40%
Term project as midterm exam: 25%
Term project as lab exam: 25%
Attendance is REQUIRED and constitutes part of your course grade: 10%
Attendance is COMPULSORY, and you are responsible for everything said in class.
I encourage you to ask questions in class. You are supposed to ask questions. Don't guess, ask a question!
You may discuss homework problems with classmates (although it is not to your advantage to do so).
You can use ideas from the literature (with proper citation).
You can use anything from the textbook/notes.
The code you submit must be written completely by you.
The following schedule is tentative; it may be updated later in the semester, so check back here frequently.
Lectures

Week | Dates | Topic | Class Notes | Reading
---|---|---|---|---
1 | September 22-26, 2010 | First Meeting & Introduction | pdf | pdf
2 | October 4-8, 2010 | Introduction I | pdf | pdf
3 | October 11-15, 2010 | Performance Analysis | pdf | pdf
4 | October 18-22, 2010 | Programming Using the Message-Passing Paradigm I | pdf | pdf
5 | October 25-29, 2010 | Programming Using the Message-Passing Paradigm II | pdf | pdf
6 | November 1-5, 2010 | Programming Using the Message-Passing Paradigm III | pdf | pdf
8 | November 15-19, 2010 | Sacrifice Feast Holiday (4.5 days), No Lecture | |
9 | November 22-26, 2010 | Programming Using the Shared Memory Paradigm I | pdf | pdf
10 | November 29 - December 3, 2010 | Programming Using the Shared Memory Paradigm II | pdf | pdf
11 | December 6-10, 2010 | Programming Using the Shared Memory Paradigm III | pdf | pdf
12 | December 13-17, 2010 | Programming Using the Shared Memory Paradigm IV | pdf | pdf
13 | December 20-24, 2010 | Network Computing I | pdf | pdf
14 | December 27-31, 2010 | Network Computing II | pdf | pdf
15 | January 3-7, 2011 | Project Presentations | |

Exams

Week | Dates | Topic
---|---|---
7 | November 8-12, 2010 | Possible Midterm Week: Term Projects
16 | January 21, 2011, 18:00, A-319 | Final