Blizzard announced the development of Diablo 3 at WWI in France last week. No release date has been scheduled yet; Blizzard usually takes about one to two years after an announcement. The Diablo 3 gameplay looks really cool, with a fully 3D engine and a lot more. So grab your old Diablo 2 CDs and install away.
I have seen many debates about which major is superior to the other. The following source, from the University at Buffalo, gives a good explanation of how they differ.
What is computer science?

Computer science (CS) is the systematic study of algorithmic methods for representing and transforming information, including their theory, design, implementation, application, and efficiency. The discipline emerged in the 1950s from the development of computability theory and the invention of the stored-program electronic computer. The roots of computer science extend deeply into mathematics and engineering. Mathematics imparts analysis to the field; engineering imparts design.

The main branches of computer science are the following:

- Algorithms: the study of effective and efficient procedures for solving problems on a computer.
- Theory of computation: the meaning and complexity of algorithms and the limits of what can be computed in principle.
- Computer architecture: the structure and functionality of computers and their implementation in terms of electronic technologies.
- Software systems: the study of the structure and implementation of large programs. It includes the study of programming languages and paradigms, programming environments, compilers, and operating systems.
- Artificial intelligence: the computational understanding of what is commonly called intelligent behavior and the creation of artifacts that exhibit such behavior.

Other important topics in computer science include computer graphics, databases, networks and protocols, numerical methods, operating systems, parallel computing, simulation and modeling, and software engineering.

What is computer engineering?

Computer engineering (CEN) is the design and prototyping of computing devices and systems. While sharing much history and many areas of interest with computer science, computer engineering concentrates its effort on the ways in which computing ideas are mapped into working physical systems.
Emerging equally from the disciplines of computer science and electrical engineering, computer engineering rests on the intellectual foundations of these disciplines, the basic physical sciences, and mathematics.

The main branches of computer engineering are the following:

- Networks: the design and implementation of distributed computing environments, from local area networks to the World Wide Web.
- Multimedia computing: the blending of data from text, speech, music, still images, video, and other sources into a coherent data stream, and its effective management, coding/decoding, and display.
- VLSI systems: the tools, properties, and design of micro-miniaturized electronic devices (Very Large Scale Integrated circuits).
- Reliable computing and advanced architectures: how fault tolerance can be built into hardware and software, methods for parallel computing, optical computing, and testing.

Other important topics in computer engineering include display engineering, image and speech processing, pattern recognition, robotics, sensors, and computer perception.
If you have purchased your Wii around this time, you are probably going to have a hard time installing any mod chip, because Nintendo may cut the pins and make the system harder to modify. Anyway, here is a quick way to install a DC2key or DC2pro in your Wii by using a PCB as a special aid, so you don't need to solder on wires; it pretty much saves you a lot of time. Oh, my disclaimer before you read on: this is for private or educational use only, and I don't take any responsibility if you break the console or use this for other purposes.
First of all, you need some soldering tools, as follows.
1. Rosin-core solder, 0.032″ and 0.022″ diameter, from RadioShack.
2. A 15-watt soldering iron. I don't recommend higher wattage, because the heat is going to burn the board.
3. A tri-wing screwdriver. You will need this special screwdriver to open the chassis; you can buy one on eBay.
4. The chip and the PCB.
That is it. All you have to do now is solder the mod chip to the PCB, and then place it properly on the d2c chip. Now you need to solder it carefully to the d2c chip as in the pictures below. If you are not comfortable with soldering, you should ask an expert to do it for you, because the solder points are very small and you have to solder them to the IC pins. Alright, that should be it; now have fun and enjoy your Wii.
One of my favourite services from Google is iGoogle, a personalized homepage just like My Yahoo and MS Live. I have been using it since the beginning of this year and have fallen in love with it. The Google Gadgets API is very cool; it allows users to create their own gadgets.
Some of the gadgets I have attached to mine, highly recommended:
“Quote of the day”
“Places to See Before You Die”
For me, iGoogle helps me organize things on the web; it is very portable and adds lots of little tips to my everyday life.
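To give a rough idea of what the Gadgets API looks like: a gadget is described by a small XML file that iGoogle fetches by URL. This is a minimal hello-world sketch based on my recollection of Google's developer docs, so treat the exact element names as approximate:

```xml
<?xml version="1.0" encoding="UTF-8"?>
<Module>
  <!-- ModulePrefs holds metadata such as the gadget's title -->
  <ModulePrefs title="Hello World" />
  <!-- The Content element carries the HTML/JavaScript that renders the gadget -->
  <Content type="html">
    <![CDATA[
      Hello, world!
    ]]>
  </Content>
</Module>
```

Host a file like this at a public URL and add that URL to your iGoogle page to try it out.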
I am proud to present Thai sukiyaki style, or as we often call it, "suki". It is very similar to Japanese sukiyaki, Chinese hot pot, or shabu-shabu. The only major difference is that we have a tasty, spicy dipping sauce.
Disk scheduling policies with lookahead, A. Thomasian, C. Liu, ACM SIGMETRICS Vol. 30, No. 2, September 2002, pp. 31-40.
Disk scheduling methods that we may already know, such as FCFS and SSTF, are concerned with minimizing seek time. However, in modern disks, minimizing the sum of seek time and rotational latency is preferable. The authors therefore introduce some new disk scheduling methods; for example, the SATF policy, which takes into account the sum of seek time and rotational latency. The authors review the major disk scheduling methods, such as FCFS, SSTF, CSCAN, CSCAN-Lai, SATF, HOL, and SATF-RP. They describe the simulation model used to evaluate the relative performance of these methods and analyze the simulation results. The main contribution is that they extend CSCAN and SATF with lookahead to be able to cope with the dynamic nature of arrivals to the system.
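To make the seek-time policies concrete, here is a toy Python sketch comparing FCFS and SSTF on a classic textbook request queue. It models cylinder positions only; a real SATF policy would also model rotational latency, which this sketch omits, and the function names are mine, not the paper's.

```python
def fcfs_seek(start, requests):
    """Total head movement when requests are served in arrival order (FCFS)."""
    total, pos = 0, start
    for r in requests:
        total += abs(r - pos)
        pos = r
    return total

def sstf_seek(start, requests):
    """Total head movement when the nearest pending request is served first (SSTF)."""
    pending = list(requests)
    total, pos = 0, start
    while pending:
        nearest = min(pending, key=lambda r: abs(r - pos))  # greedy shortest seek
        total += abs(nearest - pos)
        pos = nearest
        pending.remove(nearest)
    return total

queue = [98, 183, 37, 122, 14, 124, 65, 67]  # head starts at cylinder 53
print(fcfs_seek(53, queue))  # 640 cylinders of movement
print(sstf_seek(53, queue))  # 236 cylinders of movement
```

SSTF clearly wins on total movement here, but, as the review notes, it can starve far-away requests under a dynamic arrival stream, which is what the lookahead extensions try to address.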
As we might know, disk capacity is no longer a major concern the way it once was, and seek times have become much faster than before. I believe each disk scheduling method is suited to some specific data; it seems to me that no single method can be optimal for all the data stored on a disk. My suggestion is a disk scheduling method that acts like the multilevel feedback queue (MLFQ) we studied in an earlier chapter, where we could select the right algorithm and move requests up and down depending on their starvation level. That would be much more interesting. In my opinion, read and write speed could be improved more by increasing motor speed and other mechanical factors than by scheduling methods; scheduling would give some improvement, but only a minor one, since today we don't feel that the bottleneck of transferring data occurs at the storage device.
I would rate the significance of this paper 3/5 (modest), because 20% of the paper reviews scheduling methods that most of us already know, and the simulation does not show a significant improvement in disk utilization; that is the paper's most noticeable deficiency.
To improve the applicability, scalability, performance, and availability of storage for large data, the authors implemented and deployed a distributed storage system called Bigtable; this is the main motivation of the paper. To manage large data, the system provides clients with a simple data model that supports dynamic control over data layout and format, as described in the following paragraph.
As for their contributions, the authors spent roughly seven person-years on design and implementation. They introduce an interesting model in which a map data structure, the concepts of rows and column families, and timestamps form the basic unit of access control, among other things. The refinements and the performance evaluation described in the paper also show improvement. Three real applications or products have succeeded by using the Bigtable implementation and concepts.
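The data model the review mentions can be pictured as a sparse, sorted map from (row key, column key, timestamp) to value. This is a hedged toy sketch in Python; the class and method names are my own invention, not the paper's API, and it omits column families' access control, garbage collection, and distribution entirely:

```python
from collections import defaultdict

class ToyTable:
    """Toy sketch of Bigtable's (row, column, timestamp) -> value map."""

    def __init__(self):
        # row key -> column key -> list of (timestamp, value), newest first
        self._rows = defaultdict(lambda: defaultdict(list))

    def put(self, row, column, timestamp, value):
        cells = self._rows[row][column]
        cells.append((timestamp, value))
        cells.sort(reverse=True)  # keep the newest version at the front

    def get(self, row, column):
        """Return the most recent value for a cell, or None if absent."""
        cells = self._rows[row][column]
        return cells[0][1] if cells else None

t = ToyTable()
# "com.cnn.www" is the reversed-URL row key used as an example in the paper
t.put("com.cnn.www", "contents:", 3, "<html>v3</html>")
t.put("com.cnn.www", "contents:", 5, "<html>v5</html>")
print(t.get("com.cnn.www", "contents:"))  # <html>v5</html>
```

Keeping multiple timestamped versions per cell is what lets clients read either the latest value or a historical one, which is the "dynamic control" the review refers to.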
The paper's most noticeable deficiencies are already described by the authors in the paper. For example, the possibility of multiple copies of the same data is not considered; the user could be allowed to tell the system what data belongs in memory and what data should stay on disk, rather than having the system try to determine this dynamically; and lastly, there are no complex queries to execute or optimize. Bigtable seems to take data manipulation to a whole new level; however, my question still concerns the networking: it seems to me that latency plays an important role in retrieving or displaying query results. In my personal opinion, there is still a bottleneck, because distributed servers require a high-performance network infrastructure to achieve the highest performance.
I would rate the significance of the paper 5/5 (breakthrough), because the Bigtable system model is amazing: it adapts to handle very large data, and it has been used in many popular applications that we use nowadays, for example Google products such as Google Earth and Google Analytics. The concept of adding a new machine when more performance is needed for database operations is spectacular. I believe Bigtable will be very useful in the future, and we will most likely see upcoming products from such companies adopt this model to improve their use of databases.
Reference: Bigtable: A Distributed Storage System for Structured Data, F. Chang, J. Dean, S. Ghemawat, W. Hsieh, D. Wallach, M. Burrows, T. Chandra, A. Fikes, and R. Gruber, Proc. of the 7th USENIX Symposium on Operating Systems Design and Implementation, November 2006, pp. 205-218.
Serverless Network File Systems, T. Anderson, M. Dahlin, J. Neefe, D. Patterson, D. Roselli, and R. Wang, Proc. of the 15th ACM Symposium on Operating Systems Principles, December 1995, pp. 109-126.
The authors believe that the traditional central network file system still has a bottleneck, in that every cache-miss read and write goes through the central server. It is also expensive, in that it requires a person to operate the server in order to balance server loads. Therefore, they introduce a serverless file system that distributes file server responsibilities across large numbers of cooperating machines. The authors implemented a prototype serverless network file system called xFS to provide better performance and scalability than traditional file systems.
Three factors motivate their work on serverless network file systems: the first is the opportunity provided by fast switched LANs, the second is the expanding demands of users, and the last is the fundamental limitations of central server systems. As for their contributions, the authors make two sets of contributions. First, xFS synthesizes a number of recent innovations that provide a basis for serverless file system design. Second, they transform DASH's scalable cache consistency approach into a more general, distributed control system that is also fault tolerant. Moreover, they improve on Zebra to eliminate bottlenecks.
The paper's single most noticeable deficiency is the limitation of the measurements: the workloads are not real workloads but microbenchmarks, which exhibit better parallelism than real workloads would. Another limitation of the measurements is that they compare only against NFS, whose scalability is limited.
This paper seems very solid and interesting to me, and I like many of its ideas, for example, taking advantage of cooperative caching to serve file data from client memory. However, I still have a question regarding the future work and its limitations: what real workloads would the authors most likely measure, and how much improvement would they expect to see on such workloads?
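The cooperative-caching idea can be sketched in a few lines: on a local cache miss, a client asks its peers' memory caches before falling back to disk, since a LAN round trip to another client's RAM is much cheaper than a disk access. This is a hedged toy sketch; the class and method names are invented for illustration and have nothing to do with xFS's actual interfaces:

```python
class Client:
    """A client machine with its own in-memory block cache."""
    def __init__(self, name):
        self.name = name
        self.cache = {}  # block id -> data

class CoopCache:
    """Toy cooperative cache: local memory, then peer memory, then disk."""
    def __init__(self, clients, disk):
        self.clients = clients
        self.disk = disk  # block id -> data, the slow backing store

    def read(self, client, block):
        if block in client.cache:                       # 1. local hit
            return client.cache[block], "local"
        for peer in self.clients:                       # 2. remote hit in a peer's memory
            if peer is not client and block in peer.cache:
                client.cache[block] = peer.cache[block]
                return client.cache[block], "peer:" + peer.name
        data = self.disk[block]                         # 3. miss: go to disk
        client.cache[block] = data
        return data, "disk"

a, b = Client("a"), Client("b")
fs = CoopCache([a, b], disk={"blk0": "data0"})
print(fs.read(a, "blk0")[1])  # disk
print(fs.read(b, "blk0")[1])  # peer:a
```

The second read is satisfied from client a's memory instead of disk, which is exactly the saving the paper's design exploits at scale.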
I would rate this paper 5/5 (breakthrough), due to the challenging idea, the authors' implementation, and their measurements. It improves on the old-fashioned central server in terms of performance, scalability, and availability. It could also help reduce hardware costs.
The Multics virtual memory: concepts and design, A. Bensoussan, C. T. Clingen and R. C. Daley, Communications of the ACM, Vol. 15, NO. 5, May 1972, pp. 308 – 318.
As we might know, the use of on-line operating systems has been growing, as has the need to share information among system users, and that sharing is done through segmentation. This motivated the authors: in order to take advantage of the direct addressability of large amounts of information made possible by large virtual memories, they developed Multics (Multiplexed Information and Computing Service) to provide a generalized basis for the direct accessing and sharing of online information. There are two goals: the first is that it must be possible for all on-line information stored in the system to be addressed directly by a processor; the second is that it must be possible to control access.
Regarding the authors' contributions, they introduce an idealized memory using the segmentation and paging features of the 645, assisted by software features. Also, to take advantage of existing mechanisms, Multics processes and the Multics supervisor were introduced. The symbolic addressing conventions also provide ease of use: a user can reference a segment by part of its pathname, with the rest of the pathname supplied according to system conventions. Moreover, making a segment known to a process and improving the segment fault handler give Multics a lot of performance.
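The core addressing idea can be illustrated with a small sketch: a virtual address is a (segment number, offset) pair resolved through a per-process descriptor table, and referencing a segment that is not yet known to the process traps to a fault handler. This is a hedged toy model; the names and the exception-based fault are my own simplification, not the 645 hardware mechanism:

```python
class SegmentFault(Exception):
    """Raised when a segment is not yet known to the process.

    In Multics this trap would invoke the segment fault handler, which
    makes the segment known on demand before retrying the reference."""
    pass

def resolve(descriptor_table, seg_no, offset):
    """Translate a (segment number, offset) pair through the descriptor table."""
    if seg_no not in descriptor_table:
        raise SegmentFault(seg_no)
    segment = descriptor_table[seg_no]
    if offset >= len(segment):
        raise IndexError("offset beyond segment bound")
    return segment[offset]

# A toy per-process descriptor table mapping segment numbers to contents.
table = {0: ["supervisor", "code"], 1: ["user", "data"]}
print(resolve(table, 1, 0))  # user
```

Because every on-line object gets a segment number in this scheme, a processor can address it directly, which is the paper's first stated goal; access control lives in the descriptors, which is the second.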
The paper's single most noticeable deficiency is that there are too many assumptions, which leaves readers rather confused about how to use the features of Multics. The conclusion of the paper should summarize what the authors contributed and how to improve it in future work, instead of presenting user and supervisor viewpoints. It would also be good if the authors elaborated on how the selection algorithm works. As my question about the paper, I would like to know how much it improves on the older form of the concept. Lastly, I would rate the significance of the paper 3/5 (modest), due to the fact that it was published more than 30 years ago and lacks experiments and comparison with other uses of segmentation.