Well that’s it, finally retired.
Have a Great Christmas and a Happy New Year
P.S. This site will shut down in the coming weeks.
A year without a post!!!
Not good. Where did those 12 months go?
We’ll be back with some posts in the New Year.
In the meantime, have a Merry Christmas, and we wish you all a Happy New Year.
Hoping to get back to creating more entries in the New Year.
Merry Christmas to all and see you in 2015.
With e-Publishing now well established, two upcoming exhibitions will again provide a measure of how the Publishing Industry is performing.
IPEX 2014 opens at the ExCeL exhibition centre in East London on the 24th March and runs until the 29th. Running on a four-year cycle, this is a return to London for IPEX, and it expects to receive many new visitors who may not previously have made the trip to its former location at the NEC in Birmingham. Covering everything about the manufacturing technology & the delivery of print and e-publications, IPEX will be aiming to deliver a lot of valuable content to its visitors within the walls of ExCeL.
The London Book Fair takes place on the 8th, 9th & 10th April at Earls Court. Although now just three days long, it will aim to pack in as much as it can, and with its focused Publishing for Digital Minds Conference and an expanded dedicated Tech Central area, the London Book Fair looks on track to deliver something for everyone in the Publishing arena.
I came across this post on the Scriptorium blog a few days ago, where there was a discussion about the humble PDF and its place as an ongoing viable deliverable in today’s publishing world.
The post was written in June, and I thought that many of its points were true ‘then’, but six months is a long time in publishing, nowhere more so than in the world of Aviation, particularly with regard to the Electronic Flight Bag (EFB).
EFBs have only been around for a short while, but in many cases they are now replacing the myriad paper-based flying manuals that airline pilots previously had to carry on board each time they took off, some of which were specific only to that flight.
Early paper replacements were indeed PDF-based, but as many early users soon found, scrolling & zooming in and out while trying to find the relevant information did not create a very good User Experience.
With the ever-increasing availability of mobile devices, from smartphones to tablets, that airlines are now able to obtain, and the regulatory organisations’ willingness to sanction these devices under different classifications, the move to a more data-driven, truly interactive EFB accelerated during 2013.
As we prepare to enter 2014, there are an increasing number of new companies offering EFBs across a wider array of platforms. With more advanced elements of HTML5 increasingly in use, unlocking many EFBs from their previous ‘walled-garden’ vendor approaches, the flying aviator now has many more choices with which to improve their flying User Experience.
Does that mean that the PDF Chart is a thing of the past? Well, according to one ‘senior’ pilot I talked with recently, he still likes to keep a few PDFs on his own personal Tablet; just in case!
It is estimated that there are up to 7,000 different languages in use around the globe, and although English is widely spoken and written, there may be times when you need to insert non-English characters into documents.
I’ve had a number of repeat conversations with various people over the last few weeks about Unicode and which codes are required for various non-English language characters to display on today’s reading devices & browsers. With that in mind, I thought I would collate a list of the resources that I had shared with them.
This post will not be an in-depth Unicode tutorial (I leave that to people more expert than I), but hopefully it will help guide people in the right direction. Of course, I am open to comments and suggestions to improve the information contained in this post.
Basically, Unicode is a computing industry standard for the consistent encoding, representation and handling of text expressed in most of the world’s writing systems. It was developed in conjunction with the Universal Character Set standard & is published as The Unicode Standard. The latest version of Unicode contains a listing of more than 100,000 characters covering 100 scripts. A more in-depth description of the above can be found here & here.
Unicode is an accepted standard and has been adopted by many organisations such as Apple, HP, IBM, Microsoft, Oracle, SAP and many others. However, the level of Unicode support in the browser you are using to view this post (particularly in older versions), and whether or not you have the necessary fonts installed, may result in display problems for some of the characters, particularly with complex scripts such as Arabic. One day, maybe, all Unicode characters will display properly across all devices.
When I want to use the exclamation mark ‘!’ in a line of text, I just type it, and providing that all of the ‘writing’ software (in this case WordPress) and the ‘reading’ device you are using to read this post support Unicode, they will simply display the exclamation mark ‘!’ correctly.
Behind the scenes, Unicode is using the Decimal number 33/Hexadecimal number 0021 for the exclamation mark. Likewise, if I type a Commercial AT ‘@’ symbol, this is represented by the Unicode Decimal number 64/Hexadecimal number 0040.
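If you ever want to check a code point for yourself, a minimal Python sketch (my own illustration, nothing to do with WordPress) will print the decimal and hexadecimal values for any character:

```python
# Print each character's Unicode code point in decimal and in hex.
for ch in "!@":
    print(ch, ord(ch), format(ord(ch), "04X"))

# Output:
# ! 33 0021
# @ 64 0040
```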
These characters, along with most of the common Latin-based (European language) characters, are grouped under the Unicode names Basic Latin, Latin-1 Supplement, Latin Extended-A and Latin Extended-B, and most of the time you will see the right characters correctly displayed.
This is where you may have to start looking for the right Unicode value for the character you want to use, and a good place to start is the Unicode Character Charts, which are divided up into Scripts and Symbols. Another way of finding the right code is the Unicode Character Name Index, where an alphabetical index can be used to locate the right character.
A helpful Where is my Character? page gives some more background information about finding the right Unicode character.
This excellent HTML5 site has some very easy-to-use ways of finding the right code. It’s the one that I go to first if I’m looking for a Unicode value.
As we mentioned above, providing that the creation software and reading software support Unicode (and allowing for any special font you may need), you should be able to see the right character displayed.
For example, there is a button ‘Ω’ in the WordPress editor that allows me to add a number of special characters (that WordPress have pulled together) that are not on my normal UK keyboard into the post content, for example the Thorn symbol Þ, a Greek Sigma Σ, a Lozenge ◊ or an accented É.
However, there might be a character that I want to use in this post that WordPress does not make available, and this is where Unicode can step in and solve the problem.
As mentioned previously, Unicode represents each character by a unique value, which can be entered using its decimal or hexadecimal number. To enter a decimal numeric character code, you will need to add an ampersand ‘&’ and a hash ‘#’ at the front and a semi-colon ‘;’ at the end. If you want to use hexadecimal numbers, you will need to add an ampersand ‘&’, a hash ‘#’ and an ‘x’ at the front and a semi-colon ‘;’ at the end.
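To make that pattern concrete, here is a small Python sketch (purely illustrative; the helper names dec_ref and hex_ref are my own) that builds both forms of numeric character reference for any character:

```python
def dec_ref(ch: str) -> str:
    # Decimal form: ampersand, hash, decimal code point, semi-colon.
    return f"&#{ord(ch)};"

def hex_ref(ch: str) -> str:
    # Hexadecimal form: ampersand, hash, 'x', hex code point, semi-colon.
    return f"&#x{ord(ch):04X};"

print(dec_ref("あ"), hex_ref("あ"))  # &#12354; &#x3042;
```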
If I wanted to use the Japanese Hiragana letter A, I would type in the WordPress text editing window the Unicode value
&#12354; (dec) for the correct character あ or I would type the Unicode value
&#x3042; (hex) for the correct character あ to display when I go back to the WordPress visual editor window.
Again if I wanted to use the Thai digit 7 then I would type either
&#x0E57; (hex) to get the right character ๗ or &#3671; (dec) to display. A further example would be where I use the Unicode values
&#x8449; (hex) to give me the Traditional Chinese character “Leaf” 葉 or &#33865; (dec).
One final example would be where I might want to use the Arabic letter Yeh with a Hamza above; this requires me to use the Unicode values
&#x0626; (hex) to display it as ئ or &#1574; (dec).
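If you want to sanity-check a reference before publishing, Python’s standard html module can decode it back into the character it names; a quick illustrative sketch using the examples above:

```python
import html

# Decode numeric character references back into the characters they name.
for ref in ("&#12354;", "&#x0E57;", "&#x8449;", "&#x0626;"):
    print(ref, "->", html.unescape(ref))

# Output:
# &#12354; -> あ
# &#x0E57; -> ๗
# &#x8449; -> 葉
# &#x0626; -> ئ
```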
There are many other examples I could use, but suffice it to say, it all comes down to finding the right codes for the actual characters you want to use.
I’ve put together a number of resource links that I think will help you understand Unicode and its use. Of course if you find one that you think I should have included then let me know and I’ll update this post.
We’ve been keeping an eye on the progress of the changes to UK copyright legislation around the subject of Orphan Works.
A second batch of drafts has now been released for review, covering exceptions for data analysis for non-commercial research, and amendment of exceptions for education, research, libraries and archives. A draft for disability exceptions is still not available, but should be released in a few weeks.
The IPO is asking for feedback on these drafts; written comments should reach their main address by the 2nd August 2013. The IPO is also holding a series of open meetings (locations unknown at this time) during the w/c 22nd July 2013. More details from firstname.lastname@example.org.
Interested in what these changes might mean for you if you’re an SME? Then here is a short guide to what is changing and the possible impact it might have on your business.
The subject of the changes to UK copyright, and how they will impact orphan works, continues to draw commentary from across a wide range of publications; we have collected a few here for review.
We’ll post any additional new information over the coming weeks.
There has been quite a flurry of recent on-line discussions/posts (some quite erroneous) about ‘Orphan Works’ and how they will be affected by impending changes to UK legislation, namely the Enterprise and Regulatory Reform Act 2013 and the Copyright, Designs and Patents Act 1988.
One interesting aspect of licensing ‘Orphan Works’ is the ‘diligent search’: any institution wishing to use an ‘Orphan Work’ must first carry out a ‘diligent search’ for the owner using ‘appropriate sources’ such as independent authorising bodies, whoever these may be. There have been a number of challenges to this method, including from the photographic industry, who point out that metadata is regularly stripped from the digital files they supply for use, turning their copyrighted work into new ‘Orphan Works’.
The Government has subsequently made some clarifications on this subject via a set of FAQs published by the UK Government’s IPO.
Although it is still not totally clear what the detailed legislation will contain, the subject of ‘Orphan Works’, both existing and newly created via the stripped-metadata route, will continue to keep the discussion at the top of the UK Copyright Law agenda.
I’ve put together a set of Resource Links below that are following these important changes to UK Copyright Law in much more depth and with more knowledge than I ever could.
I’ll post further updates here as and when I hear about them.
There’s been quite a lot of discussion in the last few weeks about eBooks and their respective formats, from Bill McCoy’s (IDPF’s Executive Director) The Seven Deadly Myths of Digital Publishing and associated responses, such as Bill McCoy is Wrong – Epub3 Isn’t Ready from The Digital Reader, to a report of a study from the European and International Booksellers Federation (EIBF).
The EIBF’s study report, entitled On the Interoperability of eBook Formats, tries to understand why, when you buy a physically printed book, it’s yours to take where you like and to do pretty much what you want with under the First Sale Doctrine 1, 2, 3 (a subject for another post), and yet when you purchase an electronic book, or eBook, you are locked into one of the many different eReading platforms that abound across Europe.
The EIBF goes on to say “that if you can open a document on different computers, so why not an ebook on different platforms and in different apps?” I think that’s a bit of a simplistic comparison, even for 2013, but I understand the point. It is very annoying that an eBook one may have purchased from Amazon, Kobo or Apple still can’t be read on each of the others’ eReading platforms and devices; after all, it is 2013!
The EIBF study looks into ways to reach true interoperability across eBook formats and their DRM schemes (though this in itself is a subject of much discussion). It’s an interesting report which stretches to some 52 pages and reveals some details that I have not come across previously. But does it offer any way forward?
The EIBF comes up with a number of findings:
The report reaches a number of conclusions, including that they see the latest IDPF ePub format, ePub3, as the main vehicle to deliver cross-platform eBook interoperability. This would require both Apple and Amazon to move in a similar technical direction! However, as we have seen many times over the past few years, much can change in the eBook market in the blink of an eye. Apple has already made a move towards supporting ePub3 (It’s Official: iBooks Now Supports Epub3); Amazon, on the other hand, still seems to be hanging back and focusing on its own formats for the time being.
The EIBF report is to be part of a wider review of the EU Digital Agenda – A Europe 2020 Initiative for Digital Business across Europe. Maybe the EU needs to develop legislation backing one eBook format (and there is no reason why that should not be ePub3) that all publishers should support.
Yes, that could be seen as direct government interference, but as with the work that the EU did in bringing down mobile phone company roaming charges over the years, perhaps this is another area that needs some gentle EU guidance. Only time will tell.
As I mentioned in a previous post, I had a couple of discussions at the London Book Fair about HTML5 replacing XML first workflows.
At the time I thought it was just a small discussion, but today I had a long conversation with one publisher that seems intent on moving away from XML to an HTML5-first workflow. Interestingly, they cited reasons similar to those discussed at LBF: the over-complexity and time-scales of their XML workflows.
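For readers unfamiliar with what an XML-first workflow involves, here is a minimal Python sketch (with hypothetical element names, and nothing from this publisher’s actual system) of the kind of transform step such workflows depend on: content is authored in a semantic XML vocabulary and then converted to HTML5 for delivery. Authoring in HTML5 directly removes this step, which is a large part of the appeal.

```python
import xml.etree.ElementTree as ET

# A DocBook-like source fragment (hypothetical vocabulary).
source = "<section><title>Pre-flight checks</title><para>Check fuel levels.</para></section>"

# Map each XML element onto its HTML5 equivalent.
xml_root = ET.fromstring(source)
html_section = ET.Element("section")
ET.SubElement(html_section, "h1").text = xml_root.findtext("title")
ET.SubElement(html_section, "p").text = xml_root.findtext("para")

print(ET.tostring(html_section, encoding="unicode"))
# <section><h1>Pre-flight checks</h1><p>Check fuel levels.</p></section>
```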
They also mentioned that finding in-house XML expertise was becoming more difficult as their own technical staff had not been trained on the XML implementation and their two XML ‘experts’ were about to leave the company. Everyone else only ‘knew’ HTML!
I was not involved in their original move ‘to’ XML, but I do know that their XML workflow has some issues, though none that are not fixable. However, they seem to have really focused in on HTML5 as the way to reduce and keep down costs in the future, for example by not having to bring in any future replacement XML expertise.
They are in the process of internally developing an HTML5-based workflow, which I hope to see soon, though of course they seem very guarded about the detail. I still feel that the XML workflow issues can be resolved, but in this case they seem to have lost all confidence in getting back on track with XML.
With the abilities of HTML5 increasing, I wonder if there are many other publishers thinking the same? We will see after some future discussions.