subject

16.50. Suppose we have a sequential (ordered) file of 100,000 records, where each record is 240 bytes. Assume that B = 2,400 bytes, s = 16 ms, rd = 8.3 ms, and btt = 0.8 ms. Suppose we want to make X independent random record reads from the file. We could make X random block reads, or we could perform one exhaustive read of the entire file looking for those X records. The question is to decide when it would be more efficient to perform one exhaustive read of the entire file than to perform X individual random reads. That is, for what value of X is an exhaustive read of the file more efficient than X random reads? Develop this as a function of X.

answer
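A minimal sketch of the standard cost comparison (one common way to work this exercise), assuming the usual formulas: blocking factor bfr = ⌊B/R⌋, number of blocks b = r/bfr, cost of one random record read = s + rd + btt, and cost of one exhaustive sequential scan = s + rd + b·btt.

```python
# Break-even analysis: X independent random record reads vs. one exhaustive
# sequential scan of the file. Assumes each random read needs a full seek,
# a rotational delay, and one block transfer; the exhaustive scan needs one
# seek plus one rotational delay followed by b consecutive block transfers.

r = 100_000        # number of records
R = 240            # record size in bytes
B = 2_400          # block size in bytes
s = 16.0           # average seek time in ms
rd = 8.3           # average rotational delay in ms
btt = 0.8          # block transfer time in ms

bfr = B // R       # blocking factor: 2400 / 240 = 10 records per block
b = r // bfr       # number of blocks: 100,000 / 10 = 10,000

cost_random_per_record = s + rd + btt    # 16 + 8.3 + 0.8 = 25.1 ms per record
cost_exhaustive = s + rd + b * btt       # 16 + 8.3 + 10,000 * 0.8 = 8,024.3 ms

# Exhaustive read wins when X * 25.1 > 8,024.3, i.e. X > 319.69...
x_break_even = cost_exhaustive / cost_random_per_record
print(f"Cost per random record read: {cost_random_per_record:.1f} ms")
print(f"Cost of exhaustive scan:     {cost_exhaustive:.1f} ms")
print(f"Break-even X: {x_break_even:.2f} -> exhaustive wins for X >= {int(x_break_even) + 1}")
```

Under these assumptions, the exhaustive read is more efficient whenever X(s + rd + btt) > s + rd + b·btt, i.e. X > 8,024.3 / 25.1 ≈ 319.7, so one exhaustive scan beats X random reads once X ≥ 320.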
