Anorm 2.5 (3) - Using Streaming and Pattern Matching
[하마] 이승현 (wowlsh93@gmail.com) 2016. 8. 26. 21:31
This is a translation of the parser API section of the Anorm 2.5 documentation. Go to the original.
Streaming results
Query results are processed one row at a time; they are not all loaded into memory at once.
The following example counts the number of rows in the country table.
val countryCount: Either[List[Throwable], Long] =
SQL"Select count(*) as c from Country".fold(0L) { (c, _) => c + 1 }
On success the accumulated Long ends up on the Right; on failure a list of errors ends up on the Left.
Translator's note 1) fold(initial value) { (accumulator, row) => accumulator + 1 }
Translator's note 2) For Either, see this blog post: http://coding-korea.blogspot.kr/2012/12/scala-either.html
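As a small illustration of the notes above (a sketch, not part of the original documentation), the Either result from the fold can be consumed like this:
countryCount match {
  case Left(errors) => errors.foreach(_.printStackTrace()) // streaming failed
  case Right(count) => println(s"Country has $count rows") // the accumulated Long
}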
Results can also be partially processed:
val books: Either[List[Throwable], List[String]] =
SQL("Select name from Books").foldWhile(List[String]()) { (list, row) =>
if (list.size == 100) (list -> false) // stop with `list`
else (list :+ row[String]("name")) -> true // continue with one more name
}
It’s possible to use custom streaming:
import anorm.{ Cursor, Row }
@annotation.tailrec
def go(c: Option[Cursor], l: List[String]): List[String] = c match {
case Some(cursor) => {
if (l.size == 100) l // custom limit, partial processing
else {
go(cursor.next, l :+ cursor.row[String]("name"))
}
}
case _ => l
}
val books: Either[List[Throwable], List[String]] =
SQL("Select name from Books").withResult(go(_, List.empty[String]))
The parsing API can be used with streaming, using RowParser on each cursor.row. The previous example can be updated with a row parser.
import scala.util.{ Try, Success => TrySuccess, Failure }
// bookParser: anorm.RowParser[Book]
@annotation.tailrec
def go(c: Option[Cursor], l: List[Book]): Try[List[Book]] = c match {
case Some(cursor) => {
if (l.size == 100) TrySuccess(l) // custom limit, partial processing
else {
val parsed: Try[Book] = cursor.row.as(bookParser)
parsed match {
case TrySuccess(book) => // book successfully parsed from row
go(cursor.next, l :+ book)
case Failure(f) => /* fails to parse a book */ Failure(f)
}
}
}
case _ => TrySuccess(l)
}
val books: Either[List[Throwable], Try[List[Book]]] =
SQL("Select name from Books").withResult(go(_, List.empty[Book]))
books match {
case Left(streamingErrors) => ???
case Right(Failure(parsingError)) => ???
case Right(TrySuccess(listOfBooks)) => ???
}
Iteratee
It’s possible to use Anorm along with Play Iteratees, using the following dependencies.
libraryDependencies ++= Seq(
"com.typesafe.play" %% "anorm-iteratee" % "ANORM_VERSION",
"com.typesafe.play" %% "play-iteratees" % "ITERATEES_VERSION")
For a Play application, as play-iteratees is provided, there is no need to add this dependency.
Then the parsed results from Anorm can be turned into an Enumerator.
import java.sql.Connection
import scala.concurrent.ExecutionContext.Implicits.global
import anorm._
import play.api.libs.iteratee._
def resultAsEnumerator(implicit con: Connection): Enumerator[String] =
Iteratees.from(SQL"SELECT * FROM Test", SqlParser.scalar[String])
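As a usage sketch (not from the original documentation, and assuming the resultAsEnumerator function and imports above), the Enumerator can then be consumed with a plain Iteratee, for example to collect every streamed value:
import scala.concurrent.Future
import play.api.libs.iteratee.Iteratee

// Collect the streamed values into a list once the query has been consumed
def allValues(implicit con: Connection): Future[List[String]] =
  resultAsEnumerator |>>> Iteratee.getChunks[String]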
Multi-value support
An Anorm parameter can be multi-value, like a sequence of strings.
In such a case, the values will be prepared to be passed to JDBC.
// With default formatting (", " as separator)
SQL("SELECT * FROM Test WHERE cat IN ({categories})").
on('categories -> Seq("a", "b", "c"))
// -> SELECT * FROM Test WHERE cat IN ('a', 'b', 'c')
// With custom formatting
import anorm.SeqParameter
SQL("SELECT * FROM Test t WHERE {categories}").
on('categories -> SeqParameter(
values = Seq("a", "b", "c"), separator = " OR ",
pre = "EXISTS (SELECT NULL FROM j WHERE t.id=j.id AND name=",
post = ")"))
/* ->
SELECT * FROM Test t WHERE
EXISTS (SELECT NULL FROM j WHERE t.id=j.id AND name='a')
OR EXISTS (SELECT NULL FROM j WHERE t.id=j.id AND name='b')
OR EXISTS (SELECT NULL FROM j WHERE t.id=j.id AND name='c')
*/
On purpose, a multi-value parameter must strictly be declared with one of the supported types (List, Seq, Set, SortedSet, Stream, Vector and SeqParameter). A value of a subtype must be passed as a parameter with one of the supported types:
val seq = IndexedSeq("a", "b", "c")
// seq is instance of Seq with inferred type IndexedSeq[String]
// Wrong
SQL"SELECT * FROM Test WHERE cat in ($seq)"
// Erroneous - No parameter conversion for IndexedSeq[T]
// Right
SQL"SELECT * FROM Test WHERE cat in (${seq: Seq[String]})"
// Right
val param: Seq[String] = seq
SQL"SELECT * FROM Test WHERE cat in ($param)"
In case the parameter type is a JDBC array (java.sql.Array), its value can be passed as Array[T], as long as the element type T is a supported one.
val arr = Array("fr", "en", "ja")
SQL"UPDATE Test SET langs = $arr".execute()
A column can also be multi-value if its type is a JDBC array (java.sql.Array); it can then be mapped to either an array or a list (Array[T] or List[T]), provided the element type (T) is also supported in column mapping.
import anorm.SQL
import anorm.SqlParser.{ scalar, * }
// array and element parser
import anorm.Column.{ columnToArray, stringToArray }
val res: List[Array[String]] =
SQL("SELECT str_arr FROM tbl").as(scalar[Array[String]].*)
Convenient parsing functions are also provided for arrays, with SqlParser.array[T](...) and SqlParser.list[T](...).
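For instance, a minimal sketch (assuming SqlParser.array accepts the column name, and reusing the hypothetical tbl table and column parsers from the previous example):
import anorm._
import anorm.SqlParser.array
import anorm.Column.{ columnToArray, stringToArray }

// Parse the str_arr column of each row as Array[String]
def strArrays(implicit con: java.sql.Connection): List[Array[String]] =
  SQL("SELECT str_arr FROM tbl").as(array[String]("str_arr").*)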
Batch update
When you need to execute a SQL statement several times with different arguments, a batch query can be used (e.g. to execute a batch of insertions).
import anorm.BatchSql
val batch = BatchSql(
"INSERT INTO books(title, author) VALUES({title}, {author})",
Seq[NamedParameter]("title" -> "Play 2 for Scala",
"author" -> "Peter Hilton"),
Seq[NamedParameter]("title" -> "Learning Play! Framework 2",
"author" -> "Andy Petrella"))
val res: Array[Int] = batch.execute() // array of update count
A batch update must be called with at least one list of parameters. If a batch is executed with the mandatory first list of parameters being empty (e.g. Nil), only one statement will be executed (without parameters), which is equivalent to SQL(statement).executeUpdate().
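As an illustration of this corner case (a sketch only; the Test table and active column are hypothetical), a batch created with Nil as its first parameter list behaves like a single parameter-less statement:
import anorm.BatchSql

// Roughly equivalent to SQL("UPDATE Test SET active = false").executeUpdate()
def deactivateAll(implicit con: java.sql.Connection): Array[Int] =
  BatchSql("UPDATE Test SET active = false", Nil).execute()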
Edge cases
The type of a parameter value should be visible, so it can be properly set on the SQL statement.
Using a value as Any, explicitly or due to erasure, leads to the compilation error No implicit view available from Any => anorm.ParameterValue.
// Wrong #1
val p: Any = "strAsAny"
SQL("SELECT * FROM test WHERE id={id}").
on('id -> p) // Erroneous - No conversion Any => ParameterValue
// Right #1
val p = "strAsString"
SQL("SELECT * FROM test WHERE id={id}").on('id -> p)
// Wrong #2
val ps = Seq("a", "b", 3) // inferred as Seq[Any]
SQL("SELECT * FROM test WHERE (a={a} AND b={b}) OR c={c}").
on('a -> ps(0), // ps(0) - No conversion Any => ParameterValue
'b -> ps(1),
'c -> ps(2))
// Right #2
val ps = Seq[anorm.ParameterValue]("a", "b", 3) // Seq[ParameterValue]
SQL("SELECT * FROM test WHERE (a={a} AND b={b}) OR c={c}").
on('a -> ps(0), 'b -> ps(1), 'c -> ps(2))
// Wrong #3
val ts = Seq( // Seq[(String -> Any)] due to _2
"a" -> "1", "b" -> "2", "c" -> 3)
val nps: Seq[NamedParameter] = ts map { t =>
val p: NamedParameter = t; p
// Erroneous - no conversion (String,Any) => NamedParameter
}
SQL("SELECT * FROM test WHERE (a={a} AND b={b}) OR c={c}").on(nps :_*)
// Right #3
val nps = Seq[NamedParameter]( // Tuples as NamedParameter before Any
"a" -> "1", "b" -> "2", "c" -> 3)
SQL("SELECT * FROM test WHERE (a={a} AND b={b}) OR c={c}").
on(nps: _*) // OK - nps is already a Seq[NamedParameter]
In some cases, JDBC drivers return a result set positioned on the first row rather than before it (e.g. stored procedures with the Oracle JDBC driver).
To handle such an edge case, .withResultSetOnFirstRow(true) can be used as follows.
SQL("EXEC stored_proc {arg}").on("arg" -> "val").withResultSetOnFirstRow(true)
SQL"""EXEC stored_proc ${"val"}""".withResultSetOnFirstRow(true)
SQL"INSERT INTO dict(term, definition) VALUES ($term, $definition)".
withResultSetOnFirstRow(true).executeInsert()
// Also needed on executeInsert for such driver,
// as a ResultSet is returned in this case for the generated keys
Using Pattern Matching
You can also use pattern matching to match and extract the Row content. In this case the column names don’t matter; only the order and the types of the parameters are used to match.
The following example transforms each row to the correct Scala type:
import java.sql.Connection
import anorm._
trait Country
case class SmallCountry(name:String) extends Country
case class BigCountry(name:String) extends Country
case object France extends Country
val patternParser = RowParser[Country] {
case Row("France", _) => Success(France)
case Row(name:String, pop:Int) if (pop > 1000000) => Success(BigCountry(name))
case Row(name:String, _) => Success(SmallCountry(name))
case row => Error(TypeDoesNotMatch(s"unexpected: $row"))
}
def countries(implicit con: Connection): List[Country] =
SQL("SELECT name,population FROM Country WHERE id = {i}").
on("i" -> "id").as(patternParser.*)
Using for-comprehension
A row parser can be defined as a for-comprehension, working with the SQL result type. It can be useful when working with a lot of columns, possibly to work around the case class limit.
import anorm.SqlParser.{ str, int }
val parser = for {
a <- str("colA")
b <- int("colB")
} yield (a -> b)
val parsed: (String, Int) = SQL("SELECT * FROM Test").as(parser.single)
Retrieving data along with execution context
Besides the data itself, query execution involves context information, like SQL warnings that may be raised (and may be fatal or not), especially when working with stored SQL procedures.
The way to get context information along with query data is to use executeQuery():
import anorm.SqlQueryResult
import anorm.SqlParser.scalar
val res: SqlQueryResult = SQL("EXEC stored_proc {code}").
on('code -> code).executeQuery()
// Check execution context (here, warnings) before going on
val str: Option[String] =
res.statementWarning match {
case Some(warning) =>
warning.printStackTrace()
None
case _ => res.as(scalar[String].singleOpt) // go on row parsing
}
Working with optional/nullable values
If a column in the database can contain Null values, you need to parse it as an Option type.
For example, the indepYear column of the Country table is nullable, so you need to match it as Option[Int]:
import anorm._
import anorm.SqlParser.{ get, str }
case class Info(name: String, year: Option[Int])
val parser = str("name") ~ get[Option[Int]]("indepYear") map {
case n ~ y => Info(n, y)
}
val res: List[Info] = SQL("Select name,indepYear from Country").as(parser.*)
If you try to match this column as an Int, it won’t be able to parse Null values. Suppose you try to retrieve the column content as an Int directly from the dictionary:
SQL("Select name,indepYear from Country")().map { row =>
row[String]("name") -> row[Int]("indepYear")
}
This will produce an UnexpectedNullableFound(COUNTRY.INDEPYEAR) exception if it encounters a null value, so you need to map it properly to an Option[Int].
A nullable parameter is also passed as Option[T], T being the parameter base type (see the Parameters section thereafter).
Passing None directly for a NULL value is not supported, as it is inferred as Option[Nothing] (Nothing being unsafe for a parameter value). In this case, Option.empty[T] must be used.
// OK:
SQL("INSERT INTO Test(title) VALUES({title})").on("title" -> Some("Title"))
val title1 = Some("Title1")
SQL("INSERT INTO Test(title) VALUES({title})").on("title" -> title1)
val title2: Option[String] = None
// None inferred as Option[String] on assignment
SQL("INSERT INTO Test(title) VALUES({title})").on("title" -> title2)
// Not OK:
SQL("INSERT INTO Test(title) VALUES({title})").on("title" -> None)
// OK:
SQL"INSERT INTO Test(title) VALUES(${Option.empty[String]})"