On June 7 I went to Rio de Janeiro to take, on the next day, the test of a public contest in which I was competing for one of 36 openings for a Junior Systems Analyst - Infrastructure position at Petrobras.
The test ran from 9:00 A.M. to 1:00 P.M. and took place at Estácio de Sá University - Campus Uruguaiana, on Presidente Vargas Avenue, downtown. I stayed at a hotel called Planalto, located near the test site.
Petrobras is one of the largest companies in the world. As of May 19, Petrobras had become the world's sixth-largest company by market value, worth $295.6 billion; Microsoft, for comparison, was worth $274.0 billion. You can read more details in this news piece at Bloomberg's site: Petrobras Tops Microsoft, Is Sixth-Biggest Company.
Demand in Petrobras's field of operation is high because its principal product is petroleum. Yesterday the oil barrel reached the highest price in history, as can be read in Oil sets record above $140 a barrel on supply concerns.
Getting back to the public contest: Petrobras's test is, in my humble opinion, one of the hardest in Brazil in my area of specialization. Maybe it is the hardest one. :-) This was the second time I participated in such a public contest.
An organizing institution is responsible for creating the test and collecting the registration fee. For the first test I took, in 2007, the organizing institution was CESPE; this time it was CESGRANRIO. I had to pay a registration fee of R$ 40.00 (Brazilian Reais), approximately US$ 25.00, considering US$ 1 = R$ 1.60 as of the date of this post.
The test comprised 70 multiple-choice questions. Of these, 20 were about basic knowledge (BK): 10 Portuguese questions and 10 English questions. The other 50 were about specific knowledge (SK) related to systems analysis - infrastructure.
The content that should be studied was described in the following outline:

Computer Architecture and High Performance Computing (HPC)
- RISC and CISC architectures
- Processor organization
- Memory organization
- Concepts of concurrency, parallelism and distributed computing
- Flynn's taxonomy
- Distributed systems architecture: SMP and MPP
- Basic concepts of cluster computing and grid computing
- Load balancing
- Performance profiling

Project Management
- Basic concepts
- Resource allocation
- Schedule (chronogram)
- Work breakdown structure

Operating Systems
- OS structure
- Processor management
- Memory management
- File systems
- Input and output
- Basic concepts of compilers
- RAID

Databases
- Data independence
- Relational approach
- Entity-relationship approach
- Triggers and stored procedures
- SQL language
- High availability concepts
- Transaction management
- Lock management
- Performance management

Programming
- Algorithms and data structures
- Java code debugging
- Notions of software engineering
- Markup languages: HTML and XML
- Notions of Java programming (JEE, Servlets, JSP and EJB)

IT Service Management
- Concepts of the ITIL® library: support and service delivery
- COBIT process domains

Logical Reasoning
- Sentential and first-order logic
- Enumeration by resources
- Counting: additive and multiplicative principles

Information Security Management
- General concepts
- Information security policies
- Information classification
- ISO 27001:2005 standard
The questions were graded according to the following table:

Section | Questions | Points each
Portuguese language | 1 to 10 | 1.0
English language | 11 to 20 | 1.0
Specific knowledge | 21 to 30 | 1.0
Specific knowledge | 31 to 40 | 1.3
Specific knowledge | 41 to 50 | 1.6
Specific knowledge | 51 to 60 | 1.9
Specific knowledge | 61 to 70 | 2.2
Despite the difficulty, I think I performed well, taking into account that this was only my second try.
Today I got the final result and my final grades were:
Basic Knowledge = 15 points = 75% of the BK test (I needed 12 points = 60%).
Specific Knowledge = 41.3 points = 51.63% of the SK test (I needed 48 points = 60%).
Total = BK + SK = 15 + 41.3 = 56.3 points.
I was 48 - 41.3 = 6.7 points short of passing the specific knowledge test.
Below are the specific knowledge grades obtained by the candidates who passed the public contest:
Classification | Specific Knowledge test grade
1 | 61.5
2 | 61.4
3 | 61.2
4 | 59.6
5 | 56.4
6 | 55.8
7 | 55.6
8 | 54.8
9 | 54.5
10 | 54.2
11 | 54.1
12 | 54.0
13 | 53.8
14 | 52.6
15 | 52.1
16 | 51.8
17 | 51.4
18 | 51.2
19 | 50.9
20 | 50.7
21 | 50.7
22 | 50.7
23 | 50.7
24 | 50.6
25 | 50.5
26 | 50.1
27 | 50.1
28 | 50.0
29 | 50.0
30 | 49.9
31 | 49.9
32 | 49.6
33 | 49.6
34 | 49.5
35 | 49.5
36 | 49.4
37 | 49.3
38 | 49.2
39 | 49.2
40 | 49.2
41 | 49.1
42 | 49.1
43 | 48.8
44 | 48.7
45 | 48.7
46 | 48.7
47 | 48.7
48 | 48.7
49 | 48.5
50 | 48.4
51 | 48.4
52 | 48.3
53 | 48.1
54 | 48.1
55 | 48.0
56 | 48.0
I almost got there. I missed passing the specific knowledge test by 6.7 points, but I'm happy with my accomplishment. I'm a young guy and, God willing, I'll get there in the future. I'll keep studying. That's really necessary!
There are lots of public contests happening every year, and not only for Petrobras.
Some interesting facts about this public contest:
There were 5,866 candidates competing for 36 openings for this specific role at Petrobras.
Candidate/opening ratio ≈ 163.
Only 56 candidates were approved = 0.95% of the candidates. Practically 1%.
Total collected in registration fees = R$ 40.00 × 5,866 = R$ 234,540.00 ≈ US$ 146,587.50.
Just to close this post, you may ask: why would someone compete for such a sought-after position? The answer is easy: Petrobras pays a worthy salary. For this specific position (Junior Systems Analyst), the salary is R$ 4,798.64, approximately US$ 2,999. Here in Brazil this is a good value for someone who is beginning a career. Besides that, the company offers many advantages (salary bonus, paid courses, health plan, etc.) compared with others. Generally, whoever gets a job through a public contest ends their career in the company they took the test for. It is a permanent job.
As you can see, only those who study hard get a job at such companies, and such companies get the most committed employees.
On Monday, June 23rd, I went to Botafogo, in Rio de Janeiro, to take part in a job interview scheduled for 4:00 P.M. I arrived at 3:10 P.M. so that I could have some time to relax a little.
The interview was conducted by Roberta Furtado de Vasconcelos, a psychologist working in the human resources department at Oi. Oi is one of the biggest telecommunications players here in Brazil.
The job interview was fair, and I was asked about some of the technologies the job requires, such as object-oriented programming, the Unix OS, my level of proficiency in English, etc.
We talked for about 45 minutes, and one of the questions I found difficult to answer was: how much are you willing to earn? Wouldn't it be better if the company specified how much it wants to pay? It is complicated to answer such a question given that you don't know how much the company is willing to pay for that specific position. I had already looked at a salary table on a famous local IT magazine site called INFO Online, which helped when answering this question.
Roberta said that the interview process is composed of two interviews. The next one will take place soon if I happen to pass to the second round; that is, the first interview was just one more filter.
Let's see what happens next. If I move on, I'll post some notes here about the second part of the interview process.
The growing demand for enterprise management software is driven mainly by big companies, which need to manage millions of records: they are trying to find a better way of working with the huge amount of information available in their databases and to reach their business targets.
Among the most used software for enterprise management are those that fall into one of the following main categories:
ERP
Enterprise resource planning systems include, or try to include, all of a company's data and processes in a unified system. A typical ERP system uses multiple software and hardware components to achieve integration. A key ingredient of most ERP systems is a unified database that stores the data from the various modules of the system.
CRM
Customer relationship management uses software products developed to help companies build and keep a good relationship with their clients. This is done by storing information about each client in an efficient and intelligent way. CRM is a general term that encompasses the concepts companies use to manage their client relationships, including the capture, storage and analysis of customer, supplier, partner and internal process information.
SCM
Supply chain management is the process of planning, implementing and controlling the operations of the supply chain with the highest possible degree of efficiency. SCM includes all the transport logistics and the storage of raw materials and finished products from their origin to the consumer market.
BSC
Balanced scorecard is a concept used to measure whether the activities of a company are yielding the expected results in terms of its strategic vision. Its focus isn't only the financial result but also human factors. It helps provide a more comprehensive view of an organization, which in turn helps companies act in the best way possible on a long-term basis.
How is it used?
All this software works in an integrated way.
The chairman, VP, CFO, CEO, CIO, CTO, etc., simply sit down in front of their computers and request a report. The systems then combine the data stored in unified databases, which hold all the company's historical data, using technologies such as OLAP and OLTP. Business intelligence (BI), artificial intelligence (AI), data mining, etc., are concepts extensively applied to bring enterprise data together.
It all has to do with relevant data and how best to combine it through SQL queries to extract useful information. Information = processed data = money! :-)
My computer engineering final project is about LINQ (Language Integrated Query), that is, the integration of a query language into the programming language (specifically C#). It's a good starting point for those who want to learn the principles of working with queries to extract useful information from databases.
What to expect from the future?
The good part is that this market is growing at a fast pace, so a lot of job opportunities are emerging.
I'm particularly interested in this field: databases, computer networks, BI, HPC, etc. My area of specialization will surely be systems analysis with a focus on software engineering and infrastructure.
I expect to work in this field in the near future. It's really interesting and challenging because there are always details that can be improved and optimized.
Information management (IM) is key to the proper organization of a company. If a company organizes its data, its income tends to grow.
Just as a note: one alarming concern that arises is the security of all of a company's information, which gives rise to the field of information security.
An area attracting attention nowadays is parallel computing, more precisely high performance computing, which is another very important factor when dealing with huge data sets, as is the case with enterprise management software.
Parallel processing will be the focus in the coming years because it is necessary for the next step toward faster software. Long live the dual/quad (many) core processors.
This post is about one of the best pieces of coursework I did during my computer engineering course. I'm really proud of it. It was a cornerstone of my programming career and helped me choose an area to put my efforts into from that moment on. I was in the 5th term out of 10 (the 3rd year of the course out of 5, more exactly). The discipline was Programming Languages.
This subject is fantastic and is used extensively throughout the computer science field.
First I'll give a short description of the Quicksort and Binary Search algorithms, and then I'll present the work that my dear brother in faith Wellington Magalhães Leite and I did.
Quicksort
Quicksort is a well-known sorting algorithm developed by C. A. R. Hoare. In practice, quicksort is typically significantly faster than other sorting algorithms because its inner loop can be implemented efficiently on most architectures.
Binary Search
A binary search algorithm (or binary chop) is a technique for finding a particular value in a sorted list. It makes progressively better guesses, closing in on the sought value by selecting the middle element of the list, comparing its value to the target value, and determining whether it is greater than, less than, or equal to the target. A guess that turns out to be too high narrows the search to the lower half of the list, and one that is too low narrows it to the upper half. Pursuing this strategy iteratively, it halves the search space each time and finds the target value.
Our paper
Our paper is entitled Quicksort and Binary Search Algorithms. You can get a copy at the end of this post.
Without further ado, see its abstract below:
Sorting and searching algorithms are a core part of computer science. They are used throughout programming work whenever you need to sort a set of data or to search for a specific record (key) present in such a set.
Quicksort is one of the fastest sorting algorithms and is widely used on huge data sets, where it performs really well.
Binary search is one of the fastest searching algorithms and is applied to sorted data sets. It halves the search space in each iteration, hence its name (binary).
In this paper we present the intrinsic nature of each algorithm as well as a functional implementation of such algorithms in the C++ programming language.
Keywords: quicksort, binary search, sorting algorithms, searching algorithms, c++ programming language
CONTENTS
1 INTRODUCTION 6
1.1 Objective 6
1.2 Definition 6
1.2.1 Sorting algorithms 6
1.2.2 Searching algorithms 7
2 DEVELOPMENT 8
2.1 Quicksort algorithm (swapping and partitioning) 8
2.1.1 Detailed steps 8
2.2 Binary search algorithm 9
2.3 Studying the efficiency of the methods 9
2.3.1 The big O notation 9
2.3.2 Quicksort efficiency 10
2.3.2.1 The best and the worst case 10
2.3.2.2 Comparison with other sorting algorithms 11
2.3.3 Binary search efficiency 11
3 APPLICATION 12
3.1 Quicksort implementation 12
3.2 Binary search implementation 13
4 CONCLUSION 14
5 REFERENCES 15
Words of wisdom
As I mentioned, during the 5th term of the computer engineering course our teacher Marcus Vinicius Carvalho Guelpeli assigned some sorting and searching algorithms to the class as coursework.
The coursework was to be done in pairs, and each pair had to select a sorting and a searching algorithm and write a paper about them. We selected quicksort and binary search. The teacher warned us that these weren't the easy ones. I just thought: that's what I want. I don't want the easy ones. Why? Because if you take only the easy problems, you'll never understand anything that demands a deeper approach, and every time a difficult task comes along you'll tend to refuse it. What's the best thing to do? Just accept the challenge and go for it. Chances are you'll succeed. That's just what happened with us.
We didn't just write about quicksort and binary search; we implemented them and presented them to the class in a PowerPoint presentation. The teacher liked it so much that our grade was the highest possible! :-) In the end, what did we feel? An amazing feeling: the work was done and we had learned a lot from it. That's what a college, and even an online computer science degree program, is supposed to do: give you the subjects and motivate you, teaching the basics so that you can dig into the more difficult aspects of the subject being taught.
So what's next? Well, I'll explain how we implemented the quicksort and the binary search algorithms.
Quicksort and Binary search algorithms implementation in C++
One of the things I've always heard about is code reuse. You search for something already implemented so that you don't have to reinvent the wheel. Of course you'll complement, or even adapt, the available code to your situation. That's what we did: I found some code for the quicksort at MSDN and implemented the binary search myself. Unfortunately the page at MSDN isn't available anymore. It's been three years since I hit that page.
I wanted a way of measuring elapsed time so that we could compare the efficiency of both methods when fed different input data sets. The input is nothing more than a .txt text file full of numbers, in this case; it could be any data you want. In each test case we passed a text file with a different quantity of random numbers. For example, in one test case we passed a file named 2500.txt, meaning 2,500 random numbers; in another we passed a file named 7500.txt, and so on. I think you got it. Doing so, we could compare how well the algorithms were performing.
To generate the random numbers we used an Excel spreadsheet with the formula =RAND()*1000000. For each data set we generated new numbers and copied and pasted them into the text files that are the input of our program. During coursework like this you get to learn everywhere, even a new formula in Excel. It's really good. ;-)
Again, I searched for a timing class that I could reuse, and sure enough I found one. I didn't use it as-is, but I used it to learn how to measure time in C++. It's amazing how fast you can implement something: much of what you need in programming is already implemented; you just have to search for it, which may be exactly what you're doing here! You searched for the subject of this post and here you are, seeing something implemented. Try to learn from it; don't just copy the entire work and think you know it. That's wrong. Try to understand what the code is doing. Dive into the theory, because it explains the inner essence.
The code that follows is well commented, which is something every developer should do. You see, it was three years ago that we worked with this code; today it's difficult to remember every step I took. The comments helped me remember almost everything.
Below I present the quicksort code we borrowed from MSDN (adapted to fit our case). Note the use of the Partition method (explained in the accompanying paper):
// QuickSort implementation
void QuickSort(char** szArray, int nLower, int nUpper)
{
    // Check for non-base case
    if(nLower < nUpper)
    {
        // Split and sort partitions
        int nSplit = Partition(szArray, nLower, nUpper);
        QuickSort(szArray, nLower, nSplit - 1);
        QuickSort(szArray, nSplit + 1, nUpper);
    }
}

// QuickSort partition implementation
int Partition(char** szArray, int nLower, int nUpper)
{
    // Pivot with first element
    int nLeft = nLower + 1;
    char* szPivot = szArray[nLower];
    int nRight = nUpper;

    // Partition array elements
    char* szSwap;
    while(nLeft <= nRight)
    {
        // Find item out of place
        while(nLeft <= nRight && strcmp(szArray[nLeft], szPivot) <= 0)
            nLeft = nLeft + 1;
        while(nLeft <= nRight && strcmp(szArray[nRight], szPivot) > 0)
            nRight = nRight - 1;

        // Swap values if necessary
        if(nLeft < nRight)
        {
            szSwap = szArray[nLeft];
            szArray[nLeft] = szArray[nRight];
            szArray[nRight] = szSwap;
            nLeft = nLeft + 1;
            nRight = nRight - 1;
        }
    }

    // Move pivot element
    szSwap = szArray[nLower];
    szArray[nLower] = szArray[nRight];
    szArray[nRight] = szSwap;
    return nRight;
}
Now see the binary search method implementation that we did:
int BinarySearch(char** szArray, char key[], int nLower, int nUpper)
{
    // Termination case: key not found
    // (note: 0 doubles as "not found" here, so a key stored at index 0
    // cannot be distinguished from a miss - a known limitation)
    if(nLower > nUpper)
        return 0;

    // Overflow-safe midpoint
    int middle = nLower + (nUpper - nLower) / 2;

    if(strcmp(szArray[middle], key) == 0)
        return middle;

    if(strcmp(szArray[middle], key) > 0)
        // Search the left half
        return BinarySearch(szArray, key, nLower, middle - 1);

    // Search the right half
    return BinarySearch(szArray, key, middle + 1, nUpper);
}
Next come the function prototypes and the main entry point, which calls a menu. According to the parameters the user passes, we call the quicksort and binary search methods:
// Function prototypes
void Menu(void);
void QuickSort(char** szArray, int nLower, int nUpper);
int Partition(char** szArray, int nLower, int nUpper);
int BinarySearch(char** szArray, char key[], int nLower, int nUpper);

// Application entry point
int main(void)
{
    char op;
    do
    {
        Menu();
        printf("\n\nDo you want a new QuickSort? Y/N");
        op = getche();
        if(islower(op))
            op = toupper(op);
    }
    while(op == 'Y');
    return 0;
}
void Menu(void)
{
    // Clear screen
    system("CLS");

    // Control execution time
    clock_t initial, final;

    // Print startup banner
    printf("\nQuickSort C++ Sample Application\n");
    printf("Copyright (c)2001-2002 Microsoft Corporation. All rights reserved.\n\n");
    printf("MSDN ACADEMIC ALLIANCE [http://www.msdnaa.net/]\n\n");
    printf("BinarySearch C++ Sample Application\n");
    printf("Copyright (c)2005 Leniel Braz de Oliveira Macaferi & Wellington Magalhaes Leite.\n");
    printf("UBM COMPUTER ENGINEERING - 5TH SEMESTER [http://www.ubm.br/]\n\n");

    // Describe program function
    printf("This program example demonstrates the QuickSort and BinarySearch algorithms by\n");
    printf("reading an input file, sorting its contents, writing them to a new file and\n");
    printf("searching on them.\n\n");

    // Prompt user for filenames (fgets instead of the unsafe gets)
    char szSrcFile[1024], szDestFile[1024];
    printf("Source: ");
    fgets(szSrcFile, sizeof(szSrcFile), stdin);
    szSrcFile[strcspn(szSrcFile, "\n")] = '\0';
    printf("Output: ");
    fgets(szDestFile, sizeof(szDestFile), stdin);
    szDestFile[strcspn(szDestFile, "\n")] = '\0';

    // Read contents of source file
    const long nGrow = 8;
    long nAlloc = nGrow;
    long nSize = 0;
    char** szContents = new char* [nAlloc];
    char szSrcLine[1024];
    FILE* pStream = fopen(szSrcFile, "rt");
    if(pStream == NULL)
    {
        printf("Could not open the source file.\n");
        delete[] szContents;
        return;
    }
    while(fgets(szSrcLine, 1024, pStream))
    {
        // Trim newline character
        szSrcLine[strcspn(szSrcLine, "\n")] = '\0';

        // Append to array
        szContents[nSize] = new char [strlen(szSrcLine) + 1];
        strcpy(szContents[nSize], szSrcLine);
        nSize = nSize + 1;
        if(nSize % nGrow == 0)
        {
            // Resize the array
            char** szPrev = szContents;
            nAlloc += nGrow;
            szContents = new char* [nAlloc];
            memcpy(szContents, szPrev, nSize * sizeof(char*));
            delete[] szPrev;
        }
    }
    fclose(pStream);

    initial = clock();
    // Pass to QuickSort function
    QuickSort(szContents, 0, nSize - 1);
    final = clock();

    // Write sorted lines
    pStream = fopen(szDestFile, "wt");
    for(int nIndex = 0; nIndex < nSize; nIndex++)
    {
        // Write line to output file
        fprintf(pStream, "%s\n", szContents[nIndex]);
    }
    fclose(pStream);

    // Report program success
    printf("\nThe sorted lines have been written to the output file.\n\n");

    // QuickSort execution time
    double duration = (double)(final - initial) / CLOCKS_PER_SEC;
    printf("The QuickSort execution time was: %2.9lf s = %.0lf ms = %.0lf \xE6s\n\n", duration, duration * 1000, duration * 1000000);

    char op = '\0';
    do
    {
        printf("Do you want a BinarySearch to locate a specific key? Y/N");
        op = getche();
        if(islower(op))
            op = toupper(op);
        if(op == 'Y')
        {
            printf("\n\nType the key you want to search for: ");
            char key[1024];
            fgets(key, sizeof(key), stdin);
            key[strcspn(key, "\n")] = '\0';
            initial = clock();
            if(BinarySearch(szContents, key, 0, nSize - 1))
            {
                final = clock();
                duration = (double)(final - initial) / CLOCKS_PER_SEC;
                printf("\nKey found!\n\n");
                printf("The BinarySearch execution time was: %2.9lf s = %.0lf ms = %.0lf \xE6s\n\n", duration, duration * 1000, duration * 1000000);
            }
            else
            {
                final = clock();
                duration = (double)(final - initial) / CLOCKS_PER_SEC;
                printf("\nKey not found!\n\n");
                printf("The BinarySearch execution time was: %2.9lf s = %.0lf ms = %.0lf \xE6s\n\n", duration, duration * 1000, duration * 1000000);
            }
        }
        else
        {
            // Deallocate entire array
            for(int nIndex = 0; nIndex < nSize; nIndex++)
                // Delete current array element
                delete[] szContents[nIndex];
            delete[] szContents;
            szContents = NULL;
        }
    }
    while(op == 'Y');
}
To run the program:

1. Enter the name of a file that contains unsorted data;
2. Use the sample files included in the .ZIP package, such as 1000.txt and 2500.txt;
3. At the "Source" prompt, type: 1000.txt;
4. At the "Output" prompt, type a name for the file that will hold the sorted data, e.g.: sorted.txt;
5. After the sorting process, choose whether or not to execute a binary search. If yes, provide a value to be searched for; if not, choose whether or not to execute a new quicksort.
Postscript:
- To generate random numbers, use the Random numbers generator.xls file;
- The QuicksortBinarySearch.cpp file contains the source code. It can be used freely; just mention the authors.
Efficiency comparison
For the sake of comparison, I ran some test cases with different input files. See the results in the table that follows:
Quicksort and Binary search performance

n | File name | File size (bytes) | Quicksort (ms) | Binary search (ms)
10,000 | 10000.txt | 122,880 | 16 | 0
25,000 | 25000.txt | 200,704 | 78 | 0
50,000 | 50000.txt | 401,408 | 219 | 0
75,000 | 75000.txt | 602,112 | 360 | 0
100,000 | 100000.txt | 802,816 | 516 | 0
It's important to note that the quicksort appears to take longer than it really does. Why? Because the program also needs to read the file contents and write the sorted data back to the output file, so the whole run feels longer than the milliseconds shown in the table above. The timing functions only run while the quicksort itself is executing.
For the binary search key I input values located at the beginning, middle and end of the sorted file. There was no change in timing: the binary search found every key I entered in under a microsecond (reported as 0 µs). I ran the tests on an AMD Athlon XP 2400+ with 512 MB of RAM.
SnagIt is one of the few software products that really amaze me. Why? Because it lets me capture any kind of screenshot.
With SnagIt I can capture, edit, and share on this blog, or wherever, exactly what I see on my screen. I can even record any screen activity on my desktop.
I've been a loyal user of SnagIt for more than 4 years now, and I recommend it to anyone interested in taking screenshots of their computer screen.
The range of possibilities when it comes to capturing a screenshot with SnagIt is vast.
In the picture below you can see the different options/profiles you can choose from to capture a screenshot:
To capture the above picture I used the Window profile.
You can capture just a region of the screen, such as a menu bar or task bar, an entire window, the full screen, or a scrolling window or web page that doesn't fit in the window; you can even keep the web page's links. There are various profiles to choose from.
You can include the mouse cursor in the screenshot so that the action you are emphasizing is visible.
You can output the screenshot to the clipboard, a file, e-mail, office programs, etc. The available output types are shown in the following picture:
When you're ready, just click the big red circle with a camera in the main SnagIt window. SnagIt then takes the screenshot and opens the SnagIt Editor, where you can customize the screenshot you just captured.
See the SnagIt Editor:
It's possible to highlight specific areas of the screenshot, add effects and actions to it, and change its size and colors.
SnagIt is a rich product with lots of features; it's impossible to cover them all in just one blog post. The best way to get started is to use it and explore all its richness. Once you're familiar with it, you'll take full advantage of it.
Where do I get SnagIt? SnagIt is produced and sold by TechSmith, which specializes in screen capture/recording software that helps us, the users, communicate more clearly, create engaging presentations for diverse audiences, and analyze product usability and experience.
Another of its products is Camtasia Studio, which easily records the screen to create training, demo, and presentation videos. It's a must-have for those who frequently need to record instructional videos, as is the case for IT professionals.
Updated on 08-10-2010: if you want to use Blogger's built-in function, take a look at this question at StackOverflow.
I just wanted to back up my Blogger posts. I searched for a tool that could automate the process and fortunately found a pretty good piece of software that does just that: Blogger Backup Utility.
The software enables you to back up your posts with a high degree of customization. You can back up all the blogs you have; each one will have its own backup settings.
You can choose whether to save the posts' comments, in what format to save the posts (one Atom XML file per post or all posts in a single file), and whether you want only the most recent posts or the ones within a specified date range.
See the screenshot of the main window below:
Clicking the Backup File Naming button gives you the chance to specify naming options for the backup files.
There are two configurable options: Folder Name Options and Post File Name Options.
In Folder Name Options you can configure the directory structure in which your posts will be saved.
In Post File Name Options you can configure the name of each post file.
In both, you can choose from a diverse array of patterns to form the names of the directory structure and post files.
See the screenshot of the Backup File Naming Options:
After setting up your blog's configuration, you can click the Backup Posts button in the main window.
A progress bar in the status bar and a list of processed posts will show you the backup's progress.
The inverse process is also possible, that is, restoring your blog posts: just click Restore Posts in the main window.
It's really simple, fast and efficient. It does what it's meant to do.
The app only backs up in the Atom format, which is an XML file.
Below is the structure of the XML that represents the backup copy of this post:
The Blogger Backup utility is intended to be a simple utility to back up your Blogger posts to local disk.
Using the GData C# library, the utility walks backward in time, from your latest post to your first, saving each post to a local Atom/XML file.
I congratulated the author and wrote on the project's page at CodePlex that adding an HTML output format when saving posts would be a good feature, in case someone wanted to save their posts as HTML instead of XML.