I tried for several days to somehow make HTC Sync (the latest and greatest version 2.0.18, which is supposed to work with Windows 7 but does not) connect to my HTC Hero. Finally, I gave up and decided to use a Windows XP workstation for the ROM upgrade. The reason I decided to upgrade from ROM 1.76.405.6 to 2.73.405.5 was sluggish screen navigation: barely noticeable, but annoying nevertheless.
Here is what I did to upgrade the ROM on my HTC Hero:
- made a backup of my files on the SD card
- downloaded and installed the latest HTC Sync, version 2.0.18, on my Windows XP SP3 workstation [protected with a UPS, of course]
- connected the HTC Hero to the Windows XP workstation with HTC Sync
- downloaded and installed the ROM Upgrade for HTC Hero, version 2.73.405.5, after reading the accompanying readme. The update took approximately 4 minutes.
- a new ROM means that you’ll need to repeat the registration process (make sure you note down the APN data of your mobile provider, otherwise you’ll have no data access to Google!) and make some minor tweaks, such as picking your default ringtone, setting up the wallpaper, re-creating bookmarks and thumbnail pictures of your favorite people, entering your stocks in Yahoo! Finance, etc. I hope some day HTC will provide a better backup method than the limited calendar and contacts backup in HTC Sync. And that’s not all: you’ll also need a fresh installation of all the applications you installed from the Android Market.
[Unfortunately, I found out about the excellent Astro File Manager application for Android too late to use it to back up my old applications. Sic.]
Installation of applications can be a time-consuming and error-prone process. For example, my attempt to reinstall my “old” applications failed miserably, hanging with “Starting download…” messages. I found several hints on the net about rectifying a stuck download; what worked in my case was clearing the Market cache:
Menu->Settings->Applications->Manage applications->scroll down, select Market and tap "Clear cache".
Since my hands were already dirty, I decided to set up the Dalvik Debug Monitor, primarily for taking some screenshots of my phone. Here is how:
- downloaded the Android SDK; at the time of writing this post it was android-sdk_r05-windows.zip
- unzipped the file into a temporary directory
- then ran “SDK Setup.exe” from the directory where I unzipped the SDK (E:\DVD\HTC\android-sdk-windows)
- checked for installation only the packages needed for the Android 1.5 platform (see the picture below)
- connect your HTC Hero to a USB port on your computer and check that USB debugging is turned on (Menu->Settings->Applications->Development->check "USB debugging")
- run the Dalvik Debug Monitor by executing ddms.bat from the \android-sdk-windows\tools directory. When the GUI starts, select your HTC in the upper left pane, then open the Device menu and select Screen capture. That’s it. Here are some screenshot examples from my HTC:
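For reference, the connection check and DDMS launch can also be done from a Windows command prompt; a sketch assuming the SDK path used above (the commands need the phone attached, with USB debugging on):

```shell
rem Assumes the SDK was unzipped to E:\DVD\HTC\android-sdk-windows (as above)
cd /d E:\DVD\HTC\android-sdk-windows\tools

rem The Hero should be listed here once USB debugging is enabled on the phone
adb devices

rem Launch the Dalvik Debug Monitor; then Device -> Screen capture in the GUI
ddms.bat
```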
The last picture is from Astro File Manager, IMHO a must-have application for every Android-based phone owner!
After a day of using the HTC Hero with the upgraded ROM, all I can say is that screen navigation is now smooth. Certainly an upgrade worth my time.
…is apparently the German word from which the Slovenian slang words “šalabajzer” and “šalabajzerstvo” are derived. The word “Schallabweiserrei” is used to express personal discomfort with superficial and incompetent work by artisans. I had been using this slang word sporadically, without really knowing its origin.
No one is perfect all the time; I’m a šalabajzer from time to time myself (definitely when I’m trying to fix some broken part in the house, car, etc., without the proper tools and know-how). But as much as I hate to spit into my own plate, I can’t help admitting that the IT sector is without a doubt the business area plagued with the largest number of šalabajzers per capita. Who is really to blame? Pop culture? The fast-food lifestyle? Complex technology hidden behind user-friendly APIs and GUIs that gives everyone the feeling of being an “expert”? Modern IT yuppies with ties, polished shoes and posh talk, mostly selling stuff, rarely solving problems? “Experts” who “excel” at PowerPoint and nothing else? Business managers not really knowing their core business and processes? Phoney “managers” at all levels (extremely common in all post-socialist/communist countries, and Slovenia is no exception)? Business executives buying all this… or even belonging to one of the previously mentioned categories? I really don’t know.
Now I’m going to fix that damn light bulb…(Fingers crossed).
I’m fresh from the Tom Kyte seminar held in Zagreb on 25-26 January 2010 in the conference center of Hotel Antunović. [Many thanks go to Oracle Croatia for bringing, for the fifth (hmm… maybe sixth?) year in a row, a well-known and respected speaker to Zagreb. The seminar was (again) well organized and executed!]
All PowerPoint files and SQL scripts used by Tom are available on asktom.oracle.com in the files section (look for croatia.zip).
Tom covered six sessions in his two-day seminar:
Session 1: Top 11 things about 11g (R1)
- 1 Encrypted tablespaces
- 2 Cache more stuff
- 3 Standby just got better
- 4 Real Application Testing
- 5 Smaller, more secure DMP files
- 6 Virtual columns
- 7 Partitioning just got better
- 8 The long-awaited PIVOT
- 9 Flashback Data Archive
- 10 Finer Grained Dependency Tracking
- 11 OLTP Table Compression
At the beginning of this session Tom showed us a photo of an IBM disk (likely from the seventies) compared to a 1GB SD card:
I would say a big difference, not only in size but also in weight. :-) What’s interesting about this photo is that it was taken in a computer museum in Slovenia (the label under the disk reads: “IBM disk za velike IBM sisteme, kapaciteta 1GB” — an IBM disk for large IBM systems, 1GB capacity).
One interesting fact about 11g R1 is how little it is actually present in production environments. When Tom asked how many of us were using 11g in production, not a single hand was raised out of 70+! Approximately the same result as at the Oracle Technology Day in Slovenia (Hotel Mons) last year. I’m not convinced this is purely because the majority are waiting for 11g R2 to be released on all platforms before they (we) all happily jump on that bandwagon. There’s something bigger behind it, but more about that perhaps another time.
Session 2: All about Binds
It was nice to (finally) hear Tom present bind variables live, after reading about them in his articles and books; he still shows how passionate he is about them. Despite the fact that I’m no stranger to bind variables, I learned a thing or two from this session (thanks to an example case triggered by someone in the audience). Tom pointed out that his quest for reasonable bind usage will not likely end in the near future. Why? Because each year a bunch of new graduates come fresh from the universities, knowing something about first, second, third, fourth… normal form, but at the same time clueless about the importance of bind variables. Lots of those youngsters start their careers developing web-based applications, meaning that (at least some of) those applications are likely plagued with SQL injection vulnerabilities.
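To illustrate the point, here is my own minimal PL/SQL sketch (not one of Tom’s slides; the table t and the variables are hypothetical):

```sql
-- Hypothetical lookup: l_id comes from the application, t(id, name) is mine.

-- Without a bind: every distinct l_id produces a brand-new SQL text,
-- forcing a hard parse each time and opening the door to SQL injection.
EXECUTE IMMEDIATE
  'select name from t where id = ' || l_id INTO l_name;

-- With a bind variable: one shared cursor reused for every value,
-- and the value can never be interpreted as SQL.
EXECUTE IMMEDIATE
  'select name from t where id = :1' INTO l_name USING l_id;
```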
Session 3: Storage Techniques
Tom started with a good point about how important data is. Hardware comes and goes; five-year-old hardware is considered history. Applications also come and go, a bit less frequently than hardware, but still: apps are usually replaced every 5-10 years with something new and shiny. Guess what doesn’t change so frequently (if ever): the data! That’s why databases are at the core of any application, and that’s why the data model and proper storage techniques are so important.
[That reminds me… complete projects fail because of the data(base); I’ve seen this myself several times over the last decade. It’s astonishing to see the ignorance (especially among .NET and/or Java programmers) about the importance of good database design. My personal experience with “talented” .NET programmers is: the more they know about the coding framework and the more confident they are in their programming skills, the more likely the project will fail. The worst case I dealt with was an application developer who proposed a “farm” of mid-tier servers onto which he would pull and process data from Oracle, thus helping colleagues “tune” the database back-end; basically leaving the database to serve as a dumb dumpster for data. Yes, this was for real! Some people simply know too much and at the same time too little; they’ll likely nail every screw they see with the hammer they have.]
In the spirit of the above, Tom presented: picking proper data types and table types (B*Tree index clusters, hash clusters, sorted hash clusters, IOTs), partitioning and compression.
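As a taste of one of those structures: an index-organized table keeps the entire row inside the primary-key B*Tree, so a lookup by key needs no extra table access. A minimal sketch (the table name is mine, not from the seminar):

```sql
-- Index-organized table: rows are stored in the primary-key B*Tree itself,
-- physically ordered by id; there is no separate heap segment.
CREATE TABLE lookup_iot (
  id   NUMBER PRIMARY KEY,
  name VARCHAR2(100)
)
ORGANIZATION INDEX;
```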
Session 4: Effective Indexing
In this session Tom nicely explained B*Tree, bitmap and function-based indexes: what they are, how their internal structure looks, their strengths and weaknesses, when and when not to apply a particular index type, etc. Overall, the session was about index facts. Since indexes are perhaps the one area with the most myths floating around on the net (unfortunately even on sites with very high Google scores, which I can’t name for fear of a lawsuit ;-), I liked the last part of this session: “Mythology and other interesting anti-facts…”.
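A quick function-based index sketch of the kind discussed (the table and column names are hypothetical):

```sql
-- The index stores the result of UPPER(last_name), so a case-insensitive
-- search can use an index range scan instead of a full table scan.
CREATE INDEX emp_upper_name_ix ON emp (UPPER(last_name));

-- A query that can now (optimizer permitting) use the index:
SELECT * FROM emp WHERE UPPER(last_name) = 'KYTE';
```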
Session 5: Materialized Views, Caching
This session covered materialized views from a non-replication point of view. Nonetheless, one or two details that Tom mentioned might help you better understand the refresh process, whether you’re using mviews for replication or for other (more interesting) things, such as data “caching”, data “indexing”, query rewrite, etc.
First, I was not aware that in 10g R1 Oracle changed the default complete-refresh algorithm that was in effect in Oracle9i R2 and before.
- Oracle9i R2 and before: complete refresh = truncate table + insert /*+append*/
- Oracle10g R1 and above: complete refresh = delete + insert
So why did they replace the default algorithm with a more expensive one? Because Oracle Support demanded that the kernel developers implement atomicity; too many customers were confused by the non-transactional nature of complete refresh in 9iR2 and before. Fortunately, we can override the new default with the parameter ATOMIC_REFRESH => FALSE.
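A minimal sketch of overriding the default (the mview name is a placeholder):

```sql
-- Complete refresh with the pre-10g algorithm: truncate + direct-path insert.
-- Faster and generates far less undo/redo, but other sessions see an empty
-- mview while the refresh is running.
BEGIN
  DBMS_MVIEW.REFRESH(list           => 'MY_MVIEW',
                     method         => 'C',
                     atomic_refresh => FALSE);
END;
/
```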
Another interesting point Tom made was about direct-path inserts. Since direct-path inserts are not recorded in snapshot logs, the common misconception is that in such cases Oracle will simply refresh all rows instead of doing a fast refresh. This is wrong. Oracle uses a special table (SYS.SUMDELTA$) to record direct-path operations (possible because a direct-path operation targets blocks above the high water mark), so during a fast refresh Oracle can process just the ranges of rows that were inserted via direct path.
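Sketched with placeholder table and mview names, the scenario looks like this:

```sql
-- Direct-path insert into the master table: loads above the high water mark
-- and is NOT recorded in the materialized view (snapshot) log...
INSERT /*+ APPEND */ INTO master_t
SELECT * FROM staging_t;
COMMIT;

-- ...yet a fast refresh still works: the loaded block ranges were recorded
-- in SYS.SUMDELTA$, so only the new rows are applied to the mview.
BEGIN
  DBMS_MVIEW.REFRESH(list => 'MY_MVIEW', method => 'F');
END;
/
```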
Query rewrite was another interesting topic covered in this session. I started using the query rewrite feature in 8i (OK, I was actually just goofing around with it back then), then continued in 9i R2 with one of our largest databases. My faith in the production-quality query rewrite capabilities of Oracle beyond simple cases had almost diminished by the time 10g was released (with constraints and dimensions in place, but admittedly not always with the cleanest star schema design). After listening to Tom’s presentation and some think-time about why query rewrites so often failed for us, I think I’ll give them another try. It’s also encouraging to know that Oracle is making the kernel smarter about query rewrites from version to version: what didn’t work in 9iR2 might work in 10gR2 or 11gR2. Fingers crossed.
Session 6: Reorganizing objects – when and how
The last session was basically dedicated to debunking the common myths about all sorts of reorganizations that might get you into Groundhog Day*. You know, that feeling that you must reorganize something, the purpose being to reclaim some space or to “improve” performance by “rebuilding” tables and indexes, introducing bigger block sizes, etc. I’ll leave it to you to read all the slides from this session. It’s not that you never have to rebuild an index or reorganize a table; you might actually need to do that, and you’ll be on the safe side as long as you know why something is happening and what the consequences of the reorganization are (or could be). And don’t forget to measure and compare the “before” statistics with the new ones after the reorganization is done.
Btw, if you haven’t watched that movie yet, you’d better do so. You’d likely spend your time more productively watching that movie again and again than writing that index rebuild script that will run at midnight over the weekend. ;-)
…about Oracle is finally here. To my pleasant surprise, Jože Senegačnik has finally stopped resisting writing about the Oracle database on his recently opened blog. I say, better late than never :-).
Jože is, among other things, a co-author of the forthcoming Apress book “Expert Oracle Practices”, which I consider a sequel to Oracle Insights: Tales of the Oak Table:
Now you know how to start your 2010 shopping season :-).
No, it’s not that pesky TV advert “…our toothbrush will clean 19% better than…” or “…our Magic Washing Powder will remove 31% more stains than our competitor’s product… [you know, not just 30%; it’s 31%, because that sounds more scientific and convincing, right? :-)]”, or the tons of similar adverts that assume we’re all stupid. I thought this kind of stupidity in advertising was reserved for big chemical/pharmaceutical TV ads. I was wrong.
In late summer (August 2009), when I first read the Oracle-sponsored Edison Group report, I just laughed. However, I recently got the link to this pathetic pamphlet from two different newsletters and decided it was time to rumble a bit. I don’t have anything against Oracle. Hands down, the Oracle database is simply a better product than MS SQL 2008, and I’m glad I work with Oracle products in general. At the same time, I recognize and respect competitive products such as MS SQL and DB2. All products have their strengths and weaknesses, including Oracle. That’s why I find such marketing pamphlets (sorry, “reports”) insulting: they assume I’m an idiot, the same guy who is shopping for that 19% more efficient toothbrush.
[Sponsored by: Oracle Corporation] In this whitepaper presented by the Edison Group, reveals how Oracle Database widens the manageability lead over Microsoft SQL Server 2008. The study shows how Oracle Database can save 43% in time and 41% in database management costs over Microsoft SQL Server 2008.
A quick search revealed similar sponsored reports:
- Edison Group: Oracle Saves 43% in Database Management Cost over IBM DB2 and IBM Response to Edison Group Report
- Edison Group – Oracle Unbreakable Linux: True Enterprise-Quality Linux Support
The moral of the story is that money can (still) buy “studies” that prove anything to anyone; hardly a surprise in today’s world. It’s their right to abuse such reports in marketing campaigns, just as it’s my right to tell them: “No thanks! Sell this rubbish to someone else!”
Now I’m off to brag about being 41% more efficient than my MS SQL Server colleague. [grin]